Created attachment 1486444 [details]
alert in UI

Description of problem:
Receiving alerts on this cluster:

  device tmpfs on node 172.31.17.171:9100 is running full within the next 24 hours (mounted at /host/root/run/user/0)

But the mount at /host/root/run/user/0 is completely empty:

[root@ip-172-31-17-171 ~]# df -h | grep /run/user
tmpfs                        3.2G     0  3.2G   0% /run/user/0

Version-Release number of selected component (if applicable):
oc v3.11.0-0.21.0
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

How reproducible:
Steady state on the cluster at present.

Actual results:
The alert is currently firing.

Expected results:
No alert: there is no danger of this partition filling, so the prediction appears inaccurate.

Additional info:
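For reference, this class of alert is driven by a predict_linear-based rule over node-exporter filesystem metrics. A sketch of roughly what such a rule looks like (the rule name, windows, and thresholds here are assumptions for illustration; the exact rule shipped by cluster-monitoring-operator may differ):

```yaml
# Hedged sketch of a "disk running full within 24h" prediction rule.
# Names and time windows are illustrative, not the shipped values.
- alert: NodeDiskRunningFull
  expr: predict_linear(node_filesystem_free{job="node-exporter"}[6h], 3600 * 24) < 0
  for: 30m
  annotations:
    message: device {{ $labels.device }} on node {{ $labels.instance }} is running full within the next 24 hours (mounted at {{ $labels.mountpoint }})
```

Because the expression matches every filesystem series node-exporter exports, ephemeral mounts such as tmpfs can satisfy it even when they are empty.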
Created attachment 1486445 [details]
[6d] listing for mountpoint
Created attachment 1486459 [details]
listing showing actual < 0 mountpoint

The series actually predicted to go below zero seems to be:

{device="/dev/mapper/rootvg-var_log",endpoint="https",fstype="xfs",instance="172.31.17.171:9100",job="node-exporter",mountpoint="/host/root/var/log",namespace="openshift-monitoring",pod="node-exporter-gs6rv",service="node-exporter"}  -145849006.69107854
https://github.com/openshift/cluster-monitoring-operator/pull/173 pulled in changes that appropriately ignore devices such as the one reported here. This will land in 4.0.
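The general shape of such a fix is to filter ephemeral filesystem types out of the prediction expression. A hedged sketch (the actual label matchers used in the PR are not reproduced here; the fstype list below is an assumption):

```yaml
# Sketch only: exclude ephemeral filesystem types from the prediction
# so empty tmpfs mounts like /run/user/0 cannot trigger the alert.
expr: predict_linear(node_filesystem_free{job="node-exporter",fstype!~"tmpfs|rootfs"}[6h], 3600 * 24) < 0
```

The same effect can also be achieved on the exporter side by configuring node-exporter's filesystem collector to skip those filesystem types entirely.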
Issue is fixed with 4.0.0-0.nightly-2019-03-06-074438
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758