Description of problem:

We have additional filesystems that we are currently not monitoring, and for which we do not raise events when they are low on free space:

/tmp
/var/log
/var/log/audit
/home

See: https://github.com/ManageIQ/manageiq-appliance-build/pull/51
See: https://github.com/ManageIQ/manageiq/blob/fbb23b0ded3a11d49021ffe2a65d2e15e3165c02/app/models/miq_server/environment_management.rb#L133-L137
The code used for checking the filesystems for high usage currently looks for filesystems mounted at "/var/www/miq" and "/var/www/miq/vmdb/log", as well as "/" and anything matching the Ruby regex /pgsql\/data/ (a rough sketch of this logic follows the table below). We don't have filesystems mounted at either "/var/www/miq" or "/var/www/miq/vmdb/log".

Which filesystems (and/or mount points) do we actually want to be monitoring here? Our current options are:

Filesystem                                  Mount Point
/dev/mapper/vg_system-lv_os                 /
/dev/sda1                                   /boot
/dev/mapper/vg_system-lv_home               /home
/dev/mapper/vg_system-lv_var                /var
/dev/sda3                                   /var/www/miq_tmp
/dev/mapper/vg_data-lv_pg                   /var/opt/rh/rh-postgresql94/lib/pgsql/data
/dev/mapper/vg_system-lv_tmp                /tmp
/dev/mapper/vg_system-lv_var_log            /var/log
/dev/mapper/vg_system-lv_var_log_audit      /var/log/audit
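For reference, here is a minimal Ruby sketch of the matching behavior described above. It is loosely based on the linked environment_management.rb; the constant and method names are illustrative, not the actual ManageIQ implementation.

# Illustrative sketch only -- names are hypothetical, not the real code.
MONITORED_MOUNT_POINTS = ["/", "/var/www/miq", "/var/www/miq/vmdb/log"].freeze
PG_DATA_REGEX = /pgsql\/data/

def monitored_filesystem?(mount_point)
  MONITORED_MOUNT_POINTS.include?(mount_point) || !!(mount_point =~ PG_DATA_REGEX)
end

# Neither /var/www/miq nor /var/www/miq/vmdb/log is a real mount point on the
# appliance, so only "/" and the PostgreSQL data volume match today.
puts monitored_filesystem?("/var/log")                                   # => false
puts monitored_filesystem?("/var/opt/rh/rh-postgresql94/lib/pgsql/data") # => true (via the regex)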
https://github.com/ManageIQ/manageiq/pull/5551
New commit detected on ManageIQ/manageiq/master:
https://github.com/ManageIQ/manageiq/commit/56faf253bcdcf05522f9275aa3b742ef91e60cc4

commit 56faf253bcdcf05522f9275aa3b742ef91e60cc4
Author:     Nick Carboni <ncarboni>
AuthorDate: Fri Nov 20 16:24:57 2015 -0500
Commit:     Nick Carboni <ncarboni>
CommitDate: Fri Nov 20 16:24:57 2015 -0500

    Added usage events for new appliance filesystems

    https://bugzilla.redhat.com/show_bug.cgi?id=1281563

 app/models/miq_server/environment_management.rb | 28 ++++++++++++++-----------
 db/fixtures/miq_event_definitions.csv           |  9 ++++++--
 2 files changed, 23 insertions(+), 14 deletions(-)
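For context, the commit effectively maps the appliance mount points to per-filesystem high-usage events. The Ruby hash below is an assumed sketch of that mapping, not the committed code: the event names for "/", "/boot", "/home", "/var" and "/var/log/audit" are taken from the verification logs further down, while the remaining names and the hash structure itself are guesses.

# Illustrative only; structure and the names marked "assumed" are guesses.
MOUNT_POINT_EVENTS = {
  "/"              => "evm_server_system_disk_high_usage",
  "/boot"          => "evm_server_boot_disk_high_usage",
  "/home"          => "evm_server_home_disk_high_usage",
  "/var"           => "evm_server_var_disk_high_usage",
  "/var/log"       => "evm_server_var_log_disk_high_usage",       # assumed name
  "/var/log/audit" => "evm_server_var_log_audit_disk_high_usage",
  "/tmp"           => "evm_server_tmp_disk_high_usage",           # assumed name
}.freeze

The PostgreSQL data volume is presumably still matched via the /pgsql\/data/ regex rather than a fixed mount point, since its path depends on the installed PostgreSQL version.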
Please add verification steps.
Set the :disk_usage_gt_percent setting under the :server -> :events keys to something under the current usage of one of your appliance's filesystems (I used 5%).

After some time you should see events raised for each filesystem over the newly configured usage threshold in the Control -> Log screen, similar to:

[----] I, [2016-04-28T09:05:04.708900 #2956:389988] INFO -- : MIQ(policy-enforce_policy): Event: [evm_server_system_disk_high_usage], To: [EVM]
[----] I, [2016-04-28T09:05:04.876478 #2956:389988] INFO -- : MIQ(policy-enforce_policy): Event: [evm_server_boot_disk_high_usage], To: [EVM]
[----] I, [2016-04-28T09:05:05.033416 #2956:389988] INFO -- : MIQ(policy-enforce_policy): Event: [evm_server_home_disk_high_usage], To: [EVM]
[----] I, [2016-04-28T09:05:05.192526 #2956:389988] INFO -- : MIQ(policy-enforce_policy): Event: [evm_server_var_disk_high_usage], To: [EVM]
[----] I, [2016-04-28T09:05:05.353944 #2956:389988] INFO -- : MIQ(policy-enforce_policy): Event: [evm_server_var_log_audit_disk_high_usage], To: [EVM]
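To make the threshold behavior concrete, here is a small self-contained Ruby sketch of the kind of check that fires these events once :disk_usage_gt_percent is lowered. The Filesystem struct, the mapping argument and the puts stand-in for event raising are all placeholders, not the actual ManageIQ API.

# Illustrative only; not the real ManageIQ classes or event-raising API.
Filesystem = Struct.new(:mount_point, :used_percent)

def check_disk_usage(filesystems, mount_point_events, threshold_percent)
  filesystems.each do |fs|
    next unless fs.used_percent > threshold_percent          # e.g. 5 during verification
    event = mount_point_events[fs.mount_point]
    puts "raising #{event} for #{fs.mount_point}" if event    # stand-in for the real event call
  end
end

# With the threshold lowered to 5%, a /home volume at 12% usage would trigger
# evm_server_home_disk_high_usage.
check_disk_usage([Filesystem.new("/home", 12)],
                 { "/home" => "evm_server_home_disk_high_usage" }, 5)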
After changing the value of :disk_usage_gt_percent to 5 (5%), events are raised for each filesystem over the newly configured usage threshold and show up in the Control -> Log screen. Here is a snippet:

[----] I, [2016-05-02T04:55:17.593828 #2969:d59990] INFO -- : MIQ(policy-enforce_policy): Event: [evm_server_var_disk_high_usage], To: [EVM]
[----] I, [2016-05-02T04:55:17.605823 #2975:d59990] INFO -- : MIQ(policy-enforce_policy): Event: [evm_server_system_disk_high_usage], To: [EVM]
[----] I, [2016-05-02T04:55:26.377661 #2975:d59990] INFO -- : MIQ(policy-enforce_policy): Event: [evm_server_var_log_audit_disk_high_usage], To: [EVM]
[----] I, [2016-05-02T04:55:26.900091 #2975:d59990] INFO -- : MIQ(policy-enforce_policy): Event: [evm_server_boot_disk_high_usage], To: [EVM]
[----] I, [2016-05-02T05:00:17.092385 #2975:d59990] INFO -- : MIQ(policy-enforce_policy): Event: [evm_server_system_disk_high_usage], To: [EVM]

Verified in version: 5.6.0.4-beta2.3.20160421172650_719e256
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2016:1348