Bug 1595536

Summary: [RFE] Support VMs with VNC console on a FIPS enabled hypervisor
Product: Red Hat Enterprise Virtualization Manager
Reporter: nijin ashok <nashok>
Component: vdsm
Assignee: Tomasz Barański <tbaransk>
Status: CLOSED ERRATA
QA Contact: Liran Rotenberg <lrotenbe>
Severity: medium
Docs Contact:
Priority: medium
Version: 4.2.3
CC: ahadas, bugzilla-qe-rhv, gveitmic, hhaberma, jcall, lsurette, mavital, michal.skrivanek, mkalinin, mtessun, rob, skavishw, srevivo, tbaransk, usurse, ycui
Target Milestone: ovirt-4.4.1
Keywords: FutureFeature, ZStream
Target Release: ---
Flags: lrotenbe: testing_plan_complete+
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
When a host is running in FIPS mode, VNC must use SASL authentication instead of regular passwords because of a weak algorithm inherent to the VNC protocol. The current release facilitates using SASL by providing an Ansible role, ovirt-host-setup-vnc-sasl, which you can run manually on FIPS-enabled hosts. This role does the following: * Creates an empty SASL password database. * Prepares the SASL config file for qemu. * Changes the libvirt config file for qemu.
Story Points: ---
Clone Of:
: 1695567 (view as bug list)
Environment:
Last Closed: 2020-08-04 13:26:06 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Virt
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1640357, 1695567    
Attachments:
Description Flags
logs none
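The Doc Text above says the ovirt-host-setup-vnc-sasl role creates an empty SASL password database and prepares a SASL config file for qemu. That config might look like the following sketch; the mechanism list and database path are illustrative assumptions, not values taken from the role itself:

```ini
# Hypothetical /etc/sasl2/qemu.conf prepared for qemu's VNC server
# (mech_list and sasldb_path values are assumptions for illustration):
mech_list: scram-sha-1
sasldb_path: /etc/sasl2/vnc_passwd.db
```

Together with vnc_sasl = 1 in /etc/libvirt/qemu.conf (observed in comment 16 below), this switches qemu's VNC authentication from plain passwords to SASL.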

Description nijin ashok 2018-06-27 05:43:07 UTC
Description of problem:

A VM with a VNC console fails to start on a host that has FIPS enabled. It fails with the error below.

2018-06-27 10:22:12,054+0530 ERROR (vm/e876d0c5) [virt.vm] (vmId='e876d0c5-6fa0-45e0-8a10-e44012a74f94') The vm start process failed (vm:943)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 872, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2872, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1099, in createWithFlags
    if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirtError: internal error: process exited while connecting to monitor: 2018-06-27T04:52:11.931930Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NUMA mappings is obsoleted and will be removed in future
2018-06-27T04:52:11.973768Z qemu-kvm: -vnc 10.65.177.137:0,password: Failed to start VNC server: VNC password auth disabled due to FIPS mode, consider using the VeNCrypt or SASL authentication methods as an alternative

If the host is operating in FIPS mode, the VM is started with "-enable-fips", which disables VNC password authentication, so the VM fails to start with the error above.

A VM with a SPICE console works fine.


Version-Release number of selected component (if applicable):

vdsm-4.20.27.2-1.el7ev.x86_64


How reproducible:

100%

Steps to Reproduce:
1. Create a FIPS compliant host.

cat /proc/sys/crypto/fips_enabled
1

2. Start a VM with VNC graphics console on this host.
 
3. This will fail with the error as mentioned above.


Actual results:

Not possible to start a VM with VNC console on a FIPS compliant host.


Expected results:

It should be possible to start a VM with VNC console on a FIPS compliant host.

Additional info:

Comment 1 Michal Skrivanek 2018-06-28 04:52:26 UTC
Would require securing VNC first, and changing the authentication method from OTP to something else.
Alternatively, we can ditch VNC for FIPS hosts

Comment 2 Ryan Barry 2019-01-21 14:53:36 UTC
Re-targeting to 4.3.1 since it is missing a patch, an acked blocker flag, or both

Comment 5 RHV bug bot 2019-03-29 11:14:54 UTC
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Found non-acked flags: '{'rhevm-4.3-ga': '?'}', ]

For more info please contact: rhv-devops

Comment 7 Liran Rotenberg 2019-04-02 12:42:53 UTC
Verification failed on:
ovirt-engine-4.3.3.1-0.1.el7.noarch
vdsm-4.30.12-1.el7ev.x86_64

Steps:
1. Enabled FIPS on the host
# yum -y install prelink dracut-fips
# prelink -u -a
# dracut -f
# df /boot
Take the Filesystem value (for example /dev/vda1 or /dev/sda1)
# blkid $filesystem
for example: # blkid /dev/sda1
Take the UUID for example: 21f4da90-4055-47e4-8971-763691191f14
Edit /etc/default/grub fips=1 and boot=$uuid:
GRUB_CMDLINE_LINUX="fips=1 boot=UUID=21f4da90-4055-47e4-8971-763691191f14 ....."
Regenerate grub, BIOS host:
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot
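The /etc/default/grub edit in step 1 can be scripted. The sketch below works on a scratch copy for illustration, using the example UUID from the steps above; on a real host you would edit /etc/default/grub itself and then regenerate grub as shown.

```shell
# Sketch of the step-1 grub edit, run against a scratch copy for illustration.
uuid="21f4da90-4055-47e4-8971-763691191f14"
grub=$(mktemp)
printf 'GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"\n' > "$grub"
# Prepend fips=1 and boot=UUID=... to the kernel command line.
sed -i "s|^GRUB_CMDLINE_LINUX=\"|GRUB_CMDLINE_LINUX=\"fips=1 boot=UUID=${uuid} |" "$grub"
cat "$grub"
```

On the real file, follow this with grub2-mkconfig -o /boot/grub2/grub.cfg and a reboot, as in the steps above.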

2. Check FIPS enabled:
# sysctl crypto.fips_enabled
crypto.fips_enabled = 1
# cat /proc/sys/crypto/fips_enabled 
1

3. Run the new ansible playbook:
Copy ssh-key:
# ssh-copy-id -i <key_path> <user>@<host>
Edit /etc/ansible/hosts
Add:
<host> ansible_ssh_private_key_file=<path>
Run:
# ansible-playbook -l <host> /usr/share/ovirt-engine/playbooks/ovirt-vnc-sasl.yml
4. Set the VM's graphics console to VNC.
5. Run the VM on the FIPS-enabled host.
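The /etc/ansible/hosts entry added in step 3 might look like this sketch; the host name and key path are placeholders, not values from this bug:

```ini
# Hypothetical /etc/ansible/hosts entry; replace the host and key path with real values
rhvh1.example.com ansible_ssh_private_key_file=/root/.ssh/id_rsa
```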

Results:
Running the VM failed.
Engine log:
2019-04-02 15:30:58,045+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-11) [] EVENT_ID: VM_DOWN_ERROR(119), VM golden_env_mixed_virtio_0 is down with error
. Exit message: internal error: qemu unexpectedly closed the monitor: 2019-04-02T12:30:56.851807Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial 
NUMA mappings is obsoleted and will be removed in future
2019-04-02T12:30:56.878941Z qemu-kvm: -vnc 10.35.30.6:0,password,tls,x509=/etc/pki/vdsm/libvirt-vnc,sasl: Failed to start VNC server: VNC password auth disabled due to FIPS mode, consider using the VeNCrypt or S
ASL authentication methods as an alternative.
2019-04-02 15:30:58,045+03 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] add VM 'd77718bc-fe6d-472c-86ba-b88c5978d9a8'(golden_env_mixed_virtio_0) to rerun treatment
2019-04-02 15:30:58,051+03 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-11) [] Rerun VM 'd77718bc-fe6d-472c-86ba-b88c5978d9a8'. Called from VDS 'host_mixed_2'
2019-04-02 15:30:58,060+03 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-3309) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run 
VM golden_env_mixed_virtio_0 on Host host_mixed_2.

VDSM:
2019-04-02 15:30:57,721+0300 ERROR (vm/d77718bc) [virt.vm] (vmId='d77718bc-fe6d-472c-86ba-b88c5978d9a8') The vm start process failed (vm:937)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 866, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2864, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1110, in createWithFlags
    if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirtError: internal error: qemu unexpectedly closed the monitor: 2019-04-02T12:30:56.851807Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NU
MA mappings is obsoleted and will be removed in future
2019-04-02T12:30:56.878941Z qemu-kvm: -vnc 10.35.30.6:0,password,tls,x509=/etc/pki/vdsm/libvirt-vnc,sasl: Failed to start VNC server: VNC password auth disabled due to FIPS mode, consider using the VeNCrypt or S
ASL authentication methods as an alternative
2019-04-02 15:30:57,724+0300 INFO  (vm/d77718bc) [virt.vm] (vmId='d77718bc-fe6d-472c-86ba-b88c5978d9a8') Changed state to Down: internal error: qemu unexpectedly closed the monitor: 2019-04-02T12:30:56.851807Z q
emu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NUMA mappings is obsoleted and will be removed in future
2019-04-02T12:30:56.878941Z qemu-kvm: -vnc 10.35.30.6:0,password,tls,x509=/etc/pki/vdsm/libvirt-vnc,sasl: Failed to start VNC server: VNC password auth disabled due to FIPS mode, consider using the VeNCrypt or S
ASL authentication methods as an alternative (code=1) (vm:1675)
2019-04-02 15:30:57,727+0300 INFO  (vm/d77718bc) [virt.vm] (vmId='d77718bc-fe6d-472c-86ba-b88c5978d9a8') Stopping connection (guestagent:455)

Additional information:
I suspect that we are missing the vdsm patch on the 4.3 branch: https://gerrit.ovirt.org/#/c/97381/

Comment 8 Michal Skrivanek 2019-04-03 11:29:17 UTC
(In reply to Liran Rotenberg from comment #7)
> I suspect that we miss vdsm patch on 4.3 branch:
> https://gerrit.ovirt.org/#/c/97381/

Indeed. Too late for 4.3.3, unfortunately.

Comment 10 Tomasz Barański 2019-04-03 11:58:44 UTC
Oh, shoot!

It will need both VDSM and Core patches backported.

Comment 16 Liran Rotenberg 2019-05-16 10:44:00 UTC
Verification failed on:
ovirt-engine-4.4.0-0.0.master.20190501171039.git320c7fe.el7.noarch
vdsm-4.40.0-238.gitaf61100.el7.x86_64

Steps:
1. Enabled FIPS on the host
# yum -y install prelink dracut-fips
# prelink -u -a
# dracut -f
# df /boot
Take the Filesystem value (for example /dev/vda1 or /dev/sda1)
# blkid $filesystem
for example: # blkid /dev/sda1
Take the UUID for example: 21f4da90-4055-47e4-8971-763691191f14
Edit /etc/default/grub fips=1 and boot=$uuid:
GRUB_CMDLINE_LINUX="fips=1 boot=UUID=21f4da90-4055-47e4-8971-763691191f14 ....."
Regenerate grub, BIOS host:
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot

2. Check FIPS enabled:
# sysctl crypto.fips_enabled
crypto.fips_enabled = 1
# cat /proc/sys/crypto/fips_enabled 
1

3. Run the new ansible playbook:
Copy ssh-key:
# ssh-copy-id -i <key_path> <user>@<host>
Edit /etc/ansible/hosts
Add:
<host> ansible_ssh_private_key_file=<path>
Run:
# ansible-playbook -l <host> /usr/share/ovirt-engine/playbooks/ovirt-vnc-sasl.yml
4. Set the VM's graphics console to VNC.
5. Run the VM on the FIPS-enabled host.

Results:
Running the VM failed.

Engine log:
2019-05-16 13:40:31,667+03 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-281937) [a660560a-0514-4baa-aab7-a1d3899f713d] EVENT_ID: USER_STARTED_VM(153), VM golden_env_mixed_virtio_0 was started by admin@internal-authz (Host: host_mixed_3).
2019-05-16 13:40:34,233+03 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-1) [] VM '1b4d96d9-a556-4d4c-84c8-a68221255ece' was reported as Down on VDS 'be687e98-06de-4bf7-b88b-87254b49af87'(host_mixed_3)
2019-05-16 13:40:34,234+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-1) [] START, DestroyVDSCommand(HostName = host_mixed_3, DestroyVmVDSCommandParameters:{hostId='be687e98-06de-4bf7-b88b-87254b49af87', vmId='1b4d96d9-a556-4d4c-84c8-a68221255ece', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 6c6a1f67
2019-05-16 13:40:34,532+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-1) [] FINISH, DestroyVDSCommand, return: , log id: 6c6a1f67
2019-05-16 13:40:34,532+03 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-1) [] VM '1b4d96d9-a556-4d4c-84c8-a68221255ece'(golden_env_mixed_virtio_0) moved from 'WaitForLaunch' --> 'Down'
2019-05-16 13:40:34,563+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-1) [] EVENT_ID: VM_DOWN_ERROR(119), VM golden_env_mixed_virtio_0 is down with error. Exit message: internal error: qemu unexpectedly closed the monitor: 2019-05-16T10:40:33.537988Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NUMA mappings is obsoleted and will be removed in future
2019-05-16T10:40:33.577415Z qemu-kvm: -vnc 10.35.30.3:0,password,tls,x509=/etc/pki/vdsm/libvirt-vnc,sasl: Failed to start VNC server: VNC password auth disabled due to FIPS mode, consider using the VeNCrypt or SASL authentication methods as an alternative.
2019-05-16 13:40:34,564+03 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-1) [] add VM '1b4d96d9-a556-4d4c-84c8-a68221255ece'(golden_env_mixed_virtio_0) to rerun treatment
2019-05-16 13:40:34,569+03 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-1) [] Rerun VM '1b4d96d9-a556-4d4c-84c8-a68221255ece'. Called from VDS 'host_mixed_3'
2019-05-16 13:40:34,576+03 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-281940) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM golden_env_mixed_virtio_0 on Host host_mixed_3.

VDSM log:
2019-05-16 13:40:34,228+0300 ERROR (vm/1b4d96d9) [virt.vm] (vmId='1b4d96d9-a556-4d4c-84c8-a68221255ece') The vm start process failed (vm:933)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 867, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2880, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1110, in createWithFlags
    if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirtError: internal error: qemu unexpectedly closed the monitor: 2019-05-16T10:40:33.537988Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NU
MA mappings is obsoleted and will be removed in future
2019-05-16T10:40:33.577415Z qemu-kvm: -vnc 10.35.30.3:0,password,tls,x509=/etc/pki/vdsm/libvirt-vnc,sasl: Failed to start VNC server: VNC password auth disabled due to FIPS mode, consider using the VeNCrypt or S
ASL authentication methods as an alternative
2019-05-16 13:40:34,228+0300 INFO  (vm/1b4d96d9) [virt.vm] (vmId='1b4d96d9-a556-4d4c-84c8-a68221255ece') Changed state to Down: internal error: qemu unexpectedly closed the monitor: 2019-05-16T10:40:33.537988Z q
emu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NUMA mappings is obsoleted and will be removed in future
2019-05-16T10:40:33.577415Z qemu-kvm: -vnc 10.35.30.3:0,password,tls,x509=/etc/pki/vdsm/libvirt-vnc,sasl: Failed to start VNC server: VNC password auth disabled due to FIPS mode, consider using the VeNCrypt or S
ASL authentication methods as an alternative (code=1) (vm:1690)

Additional info:
After running the Ansible playbook, I did see vnc_sasl=1 added to /etc/libvirt/qemu.conf on the host.

Comment 17 Ryan Barry 2019-05-16 11:21:15 UTC
Logs?

Comment 18 Liran Rotenberg 2019-05-16 11:37:30 UTC
Created attachment 1569506 [details]
logs

Comment 19 Tomasz Barański 2019-05-20 11:44:38 UTC
VDSM does not send FIPS information back to the engine. The engine uses the host's kernel parameters, as configured in the engine, to tell whether FIPS mode is on or off. For the engine to construct the libvirt XML correctly, the "FIPS mode" option in the host's Kernel pane must be checked.

Comment 20 Liran Rotenberg 2019-05-21 10:27:06 UTC
Verified on:
ovirt-engine-4.4.0-0.0.master.20190519192123.gitd51360f.el7.noarch
vdsm-4.30.15-1.el7.x86_64

Steps:
1. Enabled FIPS on the host
# yum -y install prelink dracut-fips
# prelink -u -a
# dracut -f
# df /boot
Take the Filesystem value (for example /dev/vda1 or /dev/sda1)
# blkid $filesystem
for example: # blkid /dev/sda1
Take the UUID for example: 21f4da90-4055-47e4-8971-763691191f14
Edit /etc/default/grub fips=1 and boot=$uuid:
GRUB_CMDLINE_LINUX="fips=1 boot=UUID=21f4da90-4055-47e4-8971-763691191f14 ....."
Regenerate grub, BIOS host:
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot

2. Check FIPS enabled:
# sysctl crypto.fips_enabled
crypto.fips_enabled = 1
# cat /proc/sys/crypto/fips_enabled 
1

3. Set FIPS enabled in the engine (alternatively, it is possible not to add fips=1 to the kernel manually, and to redeploy and reboot the host after this step instead):
Compute->Hosts->Edit host->Kernel->FIPS mode

4. Run the new ansible playbook:
Copy ssh-key:
# ssh-copy-id -i <key_path> <user>@<host>
Edit /etc/ansible/hosts
Add:
<host> ansible_ssh_private_key_file=<path>
Run:
# ansible-playbook -l <host> /usr/share/ovirt-engine/playbooks/ovirt-vnc-sasl.yml
5. Edit a VM to VNC console.
6. Run the VM on the FIPS enabled host.

Results:
Running the VM succeeded.

Additional information:
The host must be set as VNC Encrypted.

Comment 21 John Call 2019-10-01 05:45:33 UTC
(In reply to Liran Rotenberg from comment #20)
I don't understand this; could you please clarify? I discovered that a host with 'fips=1' configured outside of RHVM won't start a VNC-based VM. The host **must** have 'fips=1' set via the RHVM GUI (Host Edit screens).
> 3. Set FIPS enbaled in the engine (accordingly, it possible not to add
> fips=1 to the kernel and redeploy+reboot the host after this step) 
> Compute->Hosts->Edit host->Kernel->FIPS mode

Can you describe how this is done?  I saw a checkbox for enabling VNC encryption in the Edit Cluster screens, but my host's capabilities still report vncEncryped=False
> Additional information:
> The host must be set as VNC Encrypted.

Comment 22 Liran Rotenberg 2019-10-02 06:58:19 UTC
(In reply to John Call from comment #21)
> (In reply to Liran Rotenberg from comment #20)
> I don't understand this, could you please clarify?  I discovered that a host
> having 'fips=1' configured outside of RHVM won't start a VNC-based VM.  The
> host **must** use RHVM GUI (Host Edit screens) to set 'fips=1'
> > 3. Set FIPS enbaled in the engine (accordingly, it possible not to add
> > fips=1 to the kernel and redeploy+reboot the host after this step) 
> > Compute->Hosts->Edit host->Kernel->FIPS mode
> 
The first thing you need is VNC encryption.
The related bug and steps are here: https://bugzilla.redhat.com/show_bug.cgi?id=1597085#c20
In short, vnc_tls=1 needs to be set in qemu.conf (but not only that, so the safest way is to re-install the host). This happens when the cluster is set to VNC encryption
(Compute->Cluster->Console->Enable VNC Encryption).
Once it is set, you need to redeploy the host(s) in the cluster; the re-installation of the host will set the parameters.
On top of that, you can put the host in FIPS mode using the above Ansible role and run VMs with VNC consoles on the FIPS-enabled hypervisor.
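Put together, the qemu configuration on a host that supports encrypted VNC under FIPS might contain something like the following sketch. The keys vnc_tls and vnc_sasl are the ones named in this bug; the certificate directory is the path visible in the qemu -vnc options in the logs above, and the exact layout is an assumption:

```ini
# Sketch of the relevant /etc/libvirt/qemu.conf settings on such a host:
vnc_tls = 1                                          # set by host re-install when the cluster enables VNC encryption
vnc_tls_x509_cert_dir = "/etc/pki/vdsm/libvirt-vnc"  # path from the qemu -vnc option in the logs
vnc_sasl = 1                                         # set by the ovirt-host-setup-vnc-sasl role / playbook
```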

> Can you describe how this is done?  I saw a checkbox for enabling VNC
> encryption in the Edit Cluster screens, but my host's capabilities still
> report vncEncryped=False
> > Additional information:
> > The host must be set as VNC Encrypted.

Comment 23 RHV bug bot 2019-10-22 17:25:45 UTC
WARN: Bug status (VERIFIED) wasn't changed but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 35 Rob Sanders 2020-01-28 12:07:47 UTC
While the FIPS console works now, I still cannot use it via noVNC and the websocket proxy service. It works fine without FIPS.

failed: Error during WebSocket handshake: Unexpected response code: 503
checkConnection @ html-console-common.js:26

Comment 36 Michal Skrivanek 2020-03-02 17:33:13 UTC
(In reply to Rob Sanders from comment #35)
> While FIPS console works now, I still cannot use it via novnc and websocket
> proxy service. It works fine without FIPS.
> 
> failed: Error during WebSocket handshake: Unexpected response code: 503
> checkConnection @ html-console-common.js:26

It's verified as working, so if it doesn't work for you, please open a new bug and/or ask first on the mailing list with details/logs/etc.

Comment 41 errata-xmlrpc 2020-08-04 13:26:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (RHV RHEL Host (ovirt-host) 4.4), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3246