Bug 1866749 - [RFE] provide warning for soft errors
Summary: [RFE] provide warning for soft errors
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: ovirt-host-deploy-ansible
Version: 4.4.1.5
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ovirt-4.4.5
Assignee: Dana
QA Contact: Petr Matyáš
URL:
Whiteboard:
Depends On:
Blocks: 1858935
 
Reported: 2020-08-06 09:20 UTC by Dana
Modified: 2021-03-18 15:15 UTC
CC: 4 users

Fixed In Version: ovirt-engine-4.4.5.3
Clone Of:
Environment:
Last Closed: 2021-03-18 15:15:10 UTC
oVirt Team: Infra
Embargoed:
pm-rhel: ovirt-4.4+
mtessun: planning_ack+
mperina: devel_ack+
gdeolive: testing_ack+


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 112853 0 master MERGED ansible: provide warning for soft errors 2021-02-17 09:10:09 UTC
oVirt gerrit 113121 0 master MERGED ansible: provide warning for soft errors 2021-02-17 09:10:09 UTC

Description Dana 2020-08-06 09:20:08 UTC
Description of problem:
Add the ability to parse the prefix of a task name in order to handle warnings and errors which do not result in a playbook failure but contain useful information that should be displayed in the audit log.
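
As an illustration only, below is a minimal sketch of how such prefix parsing could look on the engine side. The [WARNING]/[ERROR] prefixes and all class and method names are assumptions made for this sketch, not necessarily the convention that was actually merged.

// Sketch only: hypothetical prefix parsing for task names reporting soft errors.
public final class TaskNameSeverityParser {

    public enum Severity { INFO, WARNING, ERROR }

    /** Severity derived from the task name, plus the name with the prefix stripped. */
    public record ParsedTask(Severity severity, String message) { }

    public static ParsedTask parse(String taskName) {
        if (taskName.startsWith("[WARNING]")) {
            return new ParsedTask(Severity.WARNING, taskName.substring("[WARNING]".length()).trim());
        }
        if (taskName.startsWith("[ERROR]")) {
            return new ParsedTask(Severity.ERROR, taskName.substring("[ERROR]".length()).trim());
        }
        // No recognized prefix: keep the current behaviour and log with INFO.
        return new ParsedTask(Severity.INFO, taskName.trim());
    }

    public static void main(String[] args) {
        // Example: a task name reporting a soft error that should not fail the playbook.
        ParsedTask parsed = parse("[WARNING] Configure LVM filter: manual change required");
        System.out.println(parsed.severity() + " -> " + parsed.message());
    }
}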

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Martin Perina 2020-08-06 14:41:25 UTC
I'd define this RFE in a more general way: provide the ability to add task names to the audit log in the engine using specific log levels (WARNING, ERROR, INFO) for playbooks executed from the engine.
Currently we add:

1. Names of successfully executed tasks to the audit log using the INFO level
2. The name of a task whose execution failed using the ERROR level (this only works to signal that the playbook execution failed)

Within this RFE we would also like to add the possibility of logging task names using the WARNING and ERROR levels without interrupting the execution of the playbook.
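
To make the intended behaviour concrete, here is a minimal, self-contained sketch of the event handling described above; the event shape, the audit() stand-in and the prefix convention are illustrative assumptions, not the actual ovirt-engine implementation.

import java.util.List;

// Sketch only: soft errors are surfaced in the audit log without aborting the playbook.
public final class SoftErrorAuditSketch {

    /** Hypothetical, simplified view of one ansible-runner task event. */
    record TaskEvent(String taskName, boolean failed) { }

    /** Stand-in for writing an entry to the engine audit log (Events tab). */
    static void audit(String level, String message) {
        System.out.printf("[%s] %s%n", level, message);
    }

    static void processEvents(List<TaskEvent> events) {
        for (TaskEvent event : events) {
            String name = event.taskName();
            if (event.failed()) {
                // Point 2 above: a real task failure is logged with ERROR and still aborts the flow.
                audit("ERROR", name);
                throw new IllegalStateException("Playbook execution failed on task: " + name);
            }
            if (name.startsWith("[WARNING]")) {
                // The new part: a soft error is logged with WARNING level, but the playbook keeps running.
                audit("WARNING", name.substring("[WARNING]".length()).trim());
            } else if (name.startsWith("[ERROR]")) {
                audit("ERROR", name.substring("[ERROR]".length()).trim());
            } else {
                // Point 1 above: successfully executed tasks keep their INFO entries.
                audit("INFO", name);
            }
        }
    }

    public static void main(String[] args) {
        processEvents(List.of(
                new TaskEvent("Gathering Facts", false),
                new TaskEvent("[WARNING] Configure LVM filter: manual change required", false),
                new TaskEvent("Install required packages", false)));
    }
}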

Comment 2 Petr Matyáš 2021-02-15 11:42:40 UTC
What are the steps to verify this? I thought it adds additional logging on failure, but after making yum upgrade fail on a host and running a host upgrade from the UI, there is no additional information in the audit log (Events) or in the engine logs.

Comment 3 Dana 2021-02-15 12:48:53 UTC
It is currently used when executing the "Configure LVM filter" task.
I asked Nir how you can make it fail:

The easiest way to cause a failure is to run "vdsm-tool config-lvm-filter"
manually before the test, and then add another device to the filter in
/etc/lvm/lvm.conf.
The next time you run "vdsm-tool config-lvm-filter" the tool will fail and ask
you to change the rule manually.

Here is an example:

1. Running once to create a filter:

[root@host3 ~]# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:

  logical volume:  /dev/mapper/rhel-root
  mountpoint:      /
  devices:
/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f

  logical volume:  /dev/mapper/rhel-swap
  mountpoint:      [SWAP]
  devices:
/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f

This is the recommended LVM filter for this host:

  filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f$|",
"r|.*|" ]

This filter allows LVM to access the local devices used by the
hypervisor, but not shared storage owned by Vdsm. If you add a new
device to the volume group, you will need to edit the filter manually.

Configure host? [yes,NO] yes
Configuration completed successfully!

Please reboot to verify the configuration.

2. The filter added:

[root@host3 ~]# egrep '^filter =' /etc/lvm/lvm.conf
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f$|",
"r|.*|"]

3. Edit it manually to:

[root@host3 ~]# egrep '^filter =' /etc/lvm/lvm.conf
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f$|",
"a|/no/such/device|", "r|.*|"]

4. Running again with the modified rule:

[root@host3 ~]# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:

  logical volume:  /dev/mapper/rhel-root
  mountpoint:      /
  devices:
/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f

  logical volume:  /dev/mapper/rhel-swap
  mountpoint:      [SWAP]
  devices:
/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f

This is the recommended LVM filter for this host:

  filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f$|",
"r|.*|" ]

This filter allows LVM to access the local devices used by the
hypervisor, but not shared storage owned by Vdsm. If you add a new
device to the volume group, you will need to edit the filter manually.

This is the current LVM filter:

  filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f$|",
"a|/no/such/device|", "r|.*|" ]

WARNING: The current LVM filter does not match the recommended filter,
Vdsm cannot configure the filter automatically.

Please edit /etc/lvm/lvm.conf and set the 'filter' option in the
'devices' section to the recommended value.

Make sure /etc/multipath/conf.d/vdsm_blacklist.conf is set with the
recommended 'blacklist' section.

It is recommended to reboot to verify the new configuration.

Comment 4 Petr Matyáš 2021-02-18 10:07:26 UTC
Verified on ovirt-engine-4.4.5.5-0.13.el8ev.noarch

Comment 5 Sandro Bonazzola 2021-03-18 15:15:10 UTC
This bugzilla is included in the oVirt 4.4.5 release, published on March 18th 2021.

Since the problem described in this bug report should be resolved in the oVirt 4.4.5 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

