Bug 1866749

Summary: [RFE] provide warning for soft errors
Product: [oVirt] ovirt-engine Reporter: Dana <delfassy>
Component: ovirt-host-deploy-ansible    Assignee: Dana <delfassy>
Status: CLOSED CURRENTRELEASE QA Contact: Petr Matyáš <pmatyas>
Severity: medium Docs Contact:
Priority: medium    
Version: 4.4.1.5    CC: bugs, gdeolive, mperina, mtessun
Target Milestone: ovirt-4.4.5    Keywords: FutureFeature
Target Release: ---    Flags: pm-rhel: ovirt-4.4+
mtessun: planning_ack+
mperina: devel_ack+
gdeolive: testing_ack+
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: ovirt-engine-4.4.5.3 Doc Type: Enhancement
Doc Text:
Feature: Allow setting the severity of messages that are displayed via the Ansible debug module.
Reason: Some tasks can fail without stopping the host deploy flow; a debug message is printed in the host deploy log, but that log is rarely inspected when host deploy finishes successfully, so such messages can be missed.
Result: When the debug module is used in Ansible roles in the host deploy flow, a message written in the format "[SEVERITY] message", where SEVERITY is one of {ERROR, WARNING, ALERT}, is parsed and shown in the audit log with the corresponding severity level.
Story Points: ---
Clone Of: Environment:
Last Closed: 2021-03-18 15:15:10 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: Infra RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1858935    

Description Dana 2020-08-06 09:20:08 UTC
Description of problem:
Add the ability to parse the prefix of a task name to handle warnings and errors that don't result in a playbook failure but contain useful information that should be displayed in the audit log.
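
For illustration, a minimal sketch of how a role task could use this feature (the task name and message text below are made up, not taken from the actual host-deploy roles):

- name: Report a non-fatal problem to the engine audit log
  ansible.builtin.debug:
    msg: "[WARNING] something went wrong, but host deploy can continue"

Per the Doc Text above, the engine parses the "[WARNING]" prefix and shows the message in the audit log with WARNING severity rather than the default INFO.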

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Martin Perina 2020-08-06 14:41:25 UTC
I'd define this RFE in a more general way: provide the ability to add task names to the audit log in the engine, using specific log levels (WARNING, ERROR, INFO), for playbooks executed from the engine.
Currently we add:

1. Names of successfully executed tasks to the audit log using INFO level
2. The name of a task whose execution failed, using ERROR level (this only works to signal that playbook execution failed)

Within this RFE we would also like to add the possibility of adding task names to the audit log using WARNING and ERROR levels without interrupting execution of the playbook.
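
As a rough sketch (assuming the role uses the debug module as described in the Doc Text; the task and command names here are hypothetical), a role could report a non-fatal failure at ERROR level like this:

- name: Run an optional configuration step
  ansible.builtin.command: /usr/bin/optional-config-tool  # hypothetical command
  register: optional_step
  ignore_errors: true  # a failure here must not abort host deploy

- name: Surface the failure in the engine audit log
  ansible.builtin.debug:
    msg: "[ERROR] optional configuration step failed: {{ optional_step.stderr | default('') }}"
  when: optional_step is failed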

Comment 2 Petr Matyáš 2021-02-15 11:42:40 UTC
What are the steps to verify this? I thought it adds additional logging on failure, but after making a yum upgrade fail on a host and running a host upgrade from the UI, there is no additional information in the audit (Events) log nor in the engine log.

Comment 3 Dana 2021-02-15 12:48:53 UTC
It is currently used when executing the "Configure LVM filter" task.
I asked Nir how to make it fail:

The easiest way to cause a failure is to run "vdsm-tool config-lvm-filter"
manually before the test, and then add another device to the filter in
/etc/lvm/lvm.conf.
The next time you run "vdsm-tool config-lvm-filter" the tool will fail and ask you to
change the rule manually.

Here is an example:

1. Running once to create a filter:

[root@host3 ~]# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:

  logical volume:  /dev/mapper/rhel-root
  mountpoint:      /
  devices:
/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f

  logical volume:  /dev/mapper/rhel-swap
  mountpoint:      [SWAP]
  devices:
/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f

This is the recommended LVM filter for this host:

  filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f$|",
"r|.*|" ]

This filter allows LVM to access the local devices used by the
hypervisor, but not shared storage owned by Vdsm. If you add a new
device to the volume group, you will need to edit the filter manually.

Configure host? [yes,NO] yes
Configuration completed successfully!

Please reboot to verify the configuration.

2. The filter added:

[root@host3 ~]# egrep '^filter =' /etc/lvm/lvm.conf
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f$|",
"r|.*|"]

3. Edit it manually to:

[root@host3 ~]# egrep '^filter =' /etc/lvm/lvm.conf
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f$|",
"a|/no/such/device|", "r|.*|"]

4. Running again with the modified rule:

[root@host3 ~]# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:

  logical volume:  /dev/mapper/rhel-root
  mountpoint:      /
  devices:
/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f

  logical volume:  /dev/mapper/rhel-swap
  mountpoint:      [SWAP]
  devices:
/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f

This is the recommended LVM filter for this host:

  filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f$|",
"r|.*|" ]

This filter allows LVM to access the local devices used by the
hypervisor, but not shared storage owned by Vdsm. If you add a new
device to the volume group, you will need to edit the filter manually.

This is the current LVM filter:

  filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f$|",
"a|/no/such/device|", "r|.*|" ]

WARNING: The current LVM filter does not match the recommended filter,
Vdsm cannot configure the filter automatically.

Please edit /etc/lvm/lvm.conf and set the 'filter' option in the
'devices' section to the recommended value.

Make sure /etc/multipath/conf.d/vdsm_blacklist.conf is set with the
recommended 'blacklist' section.

It is recommended to reboot to verify the new configuration.

Comment 4 Petr Matyáš 2021-02-18 10:07:26 UTC
Verified on ovirt-engine-4.4.5.5-0.13.el8ev.noarch

Comment 5 Sandro Bonazzola 2021-03-18 15:15:10 UTC
This bugzilla is included in the oVirt 4.4.5 release, published on March 18th 2021.

Since the problem described in this bug report should be resolved in the oVirt 4.4.5 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.