Bug 1926018

Summary: Failed to run VM after FIPS mode is enabled
Product: Red Hat Enterprise Virtualization Manager
Component: ovirt-engine
Version: 4.4.4
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Reporter: cshao <cshao>
Assignee: Asaf Rachmani <arachman>
QA Contact: cshao <cshao>
CC: ahadas, arachman, cshao, dfodor, lsvaty, mavital, michal.skrivanek, peyu, sbonazzo, shlei, weiwang, yaniwang
Target Milestone: ovirt-4.4.6
Target Release: 4.4.6
Keywords: ZStream
Flags: cshao: testing_plan_complete+
Hardware: Unspecified
OS: Unspecified
Fixed In Version: ovirt-engine-4.4.6.5
Doc Type: If docs needed, set a value
Last Closed: 2021-06-01 13:22:11 UTC
Type: Bug
oVirt Team: Virt
Attachments:
  rhvh + engine logs
  fips-vm-failed-screenshot

Description cshao 2021-02-08 03:31:13 UTC
Created attachment 1755593 [details]
rhvh + engine logs

Description of problem:
Failed to run VM after FIPS mode is enabled.

# fips-mode-setup --enable
Setting system policy to FIPS
Note: System-wide crypto policies are applied on application start-up.
It is recommended to restart the system for the change of policies
to fully take place.
FIPS mode will be enabled.
Please reboot the system for the setting to take effect.

# reboot

# fips-mode-setup --check
FIPS mode is enabled.


rhvh.log
============
Feb  8 03:03:58 hp-bl460cg9-01 vdsm[6127]: ERROR FINISH create error=Error creating the requested VM
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 124, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/API.py", line 228, in create
    "A VM is not secure: VNC has no password and SASL "
vdsm.common.exception.CannotCreateVM: Error creating the requested VM
============


Version-Release number of selected component (if applicable):
redhat-virtualization-host-4.4.5-20210204.0.el8_3
kernel-4.18.0-240.15.1.el8_3.x86_64
imgbased-1.2.16-0.1.el8ev.noarch

Engine: 4.4.5-4

How reproducible:
100%

Steps to Reproduce:
1. Install RHVH via anaconda GUI.
2. Enable FIPS mode by running "fips-mode-setup --enable"
3. Reboot
4. Register RHVH to Engine
5. Add storage domain
6. Create VM

Actual results:
Failed to run VM after FIPS mode is enabled.

Expected results:
The VM can be started successfully after FIPS mode is enabled.

Additional info:
The issue does not occur when FIPS mode is disabled.
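
As an additional sanity check (a suggestion, not part of the original report), the setting that the VNC SASL playbook is supposed to add can be inspected directly on the host. The VM is rejected because neither a VNC password nor SASL authentication is configured for it, so on a correctly configured host you would expect something like:

# grep '^vnc_sasl' /etc/libvirt/qemu.conf
vnc_sasl=1
# systemctl is-active libvirtd
active

If vnc_sasl=1 is missing from /etc/libvirt/qemu.conf, or libvirtd was not restarted after the file was changed, starting a VM fails with the CannotCreateVM error shown above.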

Comment 1 cshao 2021-02-08 03:32:26 UTC
Created attachment 1755594 [details]
fips-vm-failed-screenshot

Comment 5 Michal Skrivanek 2021-04-12 16:14:33 UTC
Indeed, SASL is not enabled on the actual host, hence the VM fails to start.
It seems the SASL enablement steps didn't work or were not run correctly. Can you attach the output of that VNC SASL playbook?

Comment 6 cshao 2021-04-13 02:06:27 UTC
(In reply to Michal Skrivanek from comment #5)
> Indeed, SASL is not enabled on the actual host, hence the VM fails to start.
> It seems the SASL enablement steps didn't work or were not run correctly.
> Can you attach the output of that VNC SASL playbook?


# ansible-playbook --ask-pass --inventory=10.73.32.8, ovirt-vnc-sasl.yml
SSH password: 

PLAY [all] ***********************************************************************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************************************************************
ok: [10.73.32.8]

TASK [ovirt-host-setup-vnc-sasl : Create SASL QEMU config file] ******************************************************************************************************************************
changed: [10.73.32.8]

TASK [ovirt-host-setup-vnc-sasl : Use saslpasswd2 to create file with dummy user] ************************************************************************************************************
changed: [10.73.32.8]

TASK [ovirt-host-setup-vnc-sasl : Set ownership of the password db] **************************************************************************************************************************
changed: [10.73.32.8]

TASK [ovirt-host-setup-vnc-sasl : Modify qemu config file - enable VNC SASL authentication] **************************************************************************************************
changed: [10.73.32.8]

RUNNING HANDLER [ovirt-host-setup-vnc-sasl : restart libvirtd] *******************************************************************************************************************************
fatal: [10.73.32.8]: FAILED! => {}

MSG:

The conditional check 'services_in_vnc_sasl['ansible_facts']['services'].get('libvirtd.service', {}).get('state') == 'running'' failed. The error was: error while evaluating conditional (services_in_vnc_sasl['ansible_facts']['services'].get('libvirtd.service', {}).get('state') == 'running'): 'services_in_vnc_sasl' is undefined

The error appears to be in '/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-host-setup-vnc-sasl/handlers/main.yml': line 7, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

# already running.
- name: restart libvirtd
  ^ here


NO MORE HOSTS LEFT ***************************************************************************************************************************************************************************

PLAY RECAP ***********************************************************************************************************************************************************************************
10.73.32.8                 : ok=5    changed=4    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   

#

Comment 7 Asaf Rachmani 2021-04-13 07:07:14 UTC
I ran the same command on the same machine and it works fine:

# ansible-playbook --ask-pass --inventory=10.73.32.8, ovirt-vnc-sasl.yml
SSH password:
 
PLAY [all] ***********************************************************************************************************************************************************************************************************************************
 
TASK [Gathering Facts] ***********************************************************************************************************************************************************************************************************************
ok: [10.73.32.8]
 
TASK [ovirt-host-setup-vnc-sasl : Create SASL QEMU config file] ******************************************************************************************************************************************************************************
ok: [10.73.32.8]
 
TASK [ovirt-host-setup-vnc-sasl : Use saslpasswd2 to create file with dummy user] ************************************************************************************************************************************************************
ok: [10.73.32.8]
 
TASK [ovirt-host-setup-vnc-sasl : Set ownership of the password db] **************************************************************************************************************************************************************************
ok: [10.73.32.8]
 
TASK [ovirt-host-setup-vnc-sasl : Modify qemu config file - enable VNC SASL authentication] **************************************************************************************************************************************************
ok: [10.73.32.8]
 
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************
10.73.32.8                 : ok=5    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0


But this time the "RUNNING HANDLER [ovirt-host-setup-vnc-sasl : restart libvirtd]" step did not execute - all tasks reported "ok" rather than "changed", so the handler was never notified.
Checking the code, I see that the "Modify qemu config file - enable VNC SASL authentication" task notifies the handler "restart libvirtd":

- name: Modify qemu config file - enable VNC SASL authentication
  lineinfile:
    path: '/etc/libvirt/qemu.conf'
    state: present
    line: 'vnc_sasl=1'
  notify:
    restart libvirtd


The handler file contains two tasks, but only the "restart libvirtd" task runs when the notification fires. The "populate service facts" task is never notified, so services_in_vnc_sasl is undefined when its when: condition is evaluated:

- name: populate service facts for sasl libvirtd restart
  service_facts:
  register: services_in_vnc_sasl

# libvirtd may not be started automatically on hosts >= 4.4 if not
# already running.
- name: restart libvirtd
  service:
    name: libvirtd
    state: restarted
  when: "services_in_vnc_sasl['ansible_facts']['services'].get('libvirtd.service', {}).get('state') == 'running'"
  listen: "restart libvirtd"

Comment 13 errata-xmlrpc 2021-06-01 13:22:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: RHV Manager security update (ovirt-engine) [ovirt-4.4.6]), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2179