Description of problem:
Add the ability to parse the prefix of a task name to handle warnings and errors which don't result in a playbook failure, but contain useful information that should be displayed in the audit log.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
I'd define this RFE in a more common way: provide the ability to add task names to the audit log in the engine using specific log levels (WARNING, ERROR, INFO) for playbooks executed from the engine.

Currently we add:
1. Successfully executed task names to the audit log using the INFO level.
2. The task name which failed execution using the ERROR level (this works only to signal that the playbook execution failed).

Within this RFE we would also like to add the possibility of adding task names using the WARNING and ERROR levels without interrupting execution of the playbook.
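To make the intent concrete, here is a minimal sketch in Python of how such prefix parsing could work. The prefix strings ("[WARNING]", "[ERROR]") and the function name audit_level_for_task are hypothetical illustrations, not the actual convention or code used by ovirt-engine.

import logging
from typing import Tuple

# Hypothetical prefix-to-level mapping; the convention actually used by
# ovirt-engine may differ.
_PREFIX_LEVELS = {
    "[WARNING]": logging.WARNING,
    "[ERROR]": logging.ERROR,
}

def audit_level_for_task(task_name: str) -> Tuple[int, str]:
    """Return the audit-log level and the task name with any severity prefix stripped."""
    for prefix, level in _PREFIX_LEVELS.items():
        if task_name.startswith(prefix):
            # The playbook keeps running; only the audit-log level changes.
            return level, task_name[len(prefix):].strip()
    # Current behaviour described above: successfully executed tasks are
    # logged at the INFO level.
    return logging.INFO, task_name

# Example:
# audit_level_for_task("[WARNING] The current LVM filter does not match the recommended filter")
# -> (logging.WARNING, "The current LVM filter does not match the recommended filter")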
What are the steps to verify this? I thought it was adding additional logging on failure, but after making a yum upgrade fail on a host and running a host upgrade from the UI, there is no additional information in the audit log (Events) or in the engine logs.
The place where it is currently in use is the "Configure LVM filter" task. I asked Nir how you can make it fail:

The easiest way to cause a failure is to run "vdsm-tool config-lvm-filter" manually before the test, and then add another device to the filter in /etc/lvm/lvm.conf. The next time you run "vdsm-tool config-lvm-filter" the tool will fail and ask to change the rule manually.

Here is an example:

1. Running once to create a filter:

[root@host3 ~]# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:

  logical volume:  /dev/mapper/rhel-root
  mountpoint:      /
  devices:         /dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f

  logical volume:  /dev/mapper/rhel-swap
  mountpoint:      [SWAP]
  devices:         /dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f

This is the recommended LVM filter for this host:

  filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f$|", "r|.*|" ]

This filter allows LVM to access the local devices used by the hypervisor, but not shared storage owned by Vdsm. If you add a new device to the volume group, you will need to edit the filter manually.

Configure host? [yes,NO] yes
Configuration completed successfully!

Please reboot to verify the configuration.

2. The filter added:

[root@host3 ~]# egrep '^filter =' /etc/lvm/lvm.conf
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f$|", "r|.*|"]

3. Edit it manually to:

[root@host3 ~]# egrep '^filter =' /etc/lvm/lvm.conf
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f$|", "a|/no/such/device|", "r|.*|"]

4. Running again with the modified rule:

[root@host3 ~]# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:

  logical volume:  /dev/mapper/rhel-root
  mountpoint:      /
  devices:         /dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f

  logical volume:  /dev/mapper/rhel-swap
  mountpoint:      [SWAP]
  devices:         /dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f

This is the recommended LVM filter for this host:

  filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f$|", "r|.*|" ]

This filter allows LVM to access the local devices used by the hypervisor, but not shared storage owned by Vdsm. If you add a new device to the volume group, you will need to edit the filter manually.

This is the current LVM filter:

  filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-3Ab8aa-CXcT-cEmn-E6tb-n8da-cVTp-ymKn5f$|", "a|/no/such/device|", "r|.*|" ]

WARNING: The current LVM filter does not match the recommended filter, Vdsm cannot configure the filter automatically.

Please edit /etc/lvm/lvm.conf and set the 'filter' option in the 'devices' section to the recommended value.

Make sure /etc/multipath/conf.d/vdsm_blacklist.conf is set with the recommended 'blacklist' section.

It is recommended to reboot to verify the new configuration.
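Step 3 above is a manual edit. As a convenience, the following minimal Python sketch automates it under the assumption that the filter line looks exactly like the one vdsm-tool wrote in step 2; the path and the injected "a|/no/such/device|" entry are taken from the example above, and the script itself is illustrative only, not part of any shipped tooling.

# Sketch: inject a bogus accept rule before the final reject-all rule so the
# next "vdsm-tool config-lvm-filter" run prints the WARNING shown in step 4.
import re

LVM_CONF = "/etc/lvm/lvm.conf"

with open(LVM_CONF) as f:
    conf = f.read()

# Assumes a single-line filter ending with the reject-all rule "r|.*|",
# as written by vdsm-tool in step 2.
patched = re.sub(
    r'(^filter = \[.*)("r\|\.\*\|"\s*\]\s*)$',
    r'\1"a|/no/such/device|", \2',
    conf,
    flags=re.MULTILINE,
)

with open(LVM_CONF, "w") as f:
    f.write(patched)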
Verified on ovirt-engine-4.4.5.5-0.13.el8ev.noarch
This bugzilla is included in the oVirt 4.4.5 release, published on March 18th 2021. Since the problem described in this bug report should be resolved in the oVirt 4.4.5 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.