Bug 1661037 - Filesystem full rule gives confusing output when bind mounts are used
Summary: Filesystem full rule gives confusing output when bind mounts are used
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Cloud Software Services (cloud.redhat.com)
Classification: Red Hat
Component: Insights - Rules
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Qi Guo [Flos]
QA Contact: Jeff Needle
Docs Contact: Kevin Blake
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-12-19 22:16 UTC by Amar Huchchanavar
Modified: 2019-11-01 03:51 UTC (History)
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-01-07 05:55:40 UTC
Target Upstream Version:


Attachments

Description Amar Huchchanavar 2018-12-19 22:16:23 UTC
Description of problem:
Rule Name: Decreased stability and/or performance due to filesystem over 95% capacity

The filesystem full rule output includes duplicate entries when bind mounts are used. See the attached screenshot.

Version-Release number of selected component (if applicable):
Insights

How reproducible:
Always

Steps to Reproduce:
Reproducer:
- Create a filesystem /fullfs of 5 GB
- Create 10 directories /fullfs/bindmountX
- Create 10 bind mounts to /fullfs/bindmountX
- Create a huge file of 4.95 GB in /fullfs so usage exceeds 95%
- Run insights-client

Actual results:
Please check the attached screenshots.

Expected results:
Either show one entry for the actual file system, or create a separate section for bind mounts.

Additional info:
Local Reproducer:

# df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/rhel_vm250--198-root   13G   11G  1.7G  87% /
devtmpfs                          1.9G     0  1.9G   0% /dev
tmpfs                             1.9G     0  1.9G   0% /dev/shm
tmpfs                             1.9G   17M  1.9G   1% /run
tmpfs                             1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1                        1014M  143M  872M  15% /boot
tmpfs                             379M     0  379M   0% /run/user/0
/dev/mapper/newvg-newlv           2.9G  2.7G   58M  98% /fullfs  <<<<<<


# du -sh /fullfs/*
4.0K	/fullfs/dir1
4.0K	/fullfs/dir2
4.0K	/fullfs/dir3
4.0K	/fullfs/dir4
4.0K	/fullfs/dir5
16K	/fullfs/lost+found
501M	/fullfs/test2.img
501M	/fullfs/test3.img
501M	/fullfs/test4.img
200M	/fullfs/test5.img
1.1G	/fullfs/test.img



# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Mon Sep  3 07:48:11 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel_vm250--198-root /                       xfs     defaults        0 0
UUID=9bd3db7f-2a89-4852-b65a-240570ab7635 /boot                   xfs     defaults        0 0
/dev/mapper/rhel_vm250--198-swap swap                    swap    defaults        0 0
/dev/mapper/newvg-newlv		/fullfs			ext4	defaults 	0 0 


/fullfs/dir1	/fullfs/dir11		none 	bind
/fullfs/dir2    /fullfs/dir22           none    bind
/fullfs/dir3    /fullfs/dir33           none    bind
/fullfs/dir4    /fullfs/dir44           none    bind
/fullfs/dir5    /fullfs/dir55           none    bind



Insights rule results:

Detected issues
This host has the following file system(s) nearing or at capacity/inode usage:
Filesystem 	Used Capacity % 	Used INode %
/fullfs 	98% 	-
/fullfs/dir11 	98% 	-
/fullfs/dir22 	98% 	-
/fullfs/dir33 	98% 	-
/fullfs/dir44 	98% 	-
/fullfs/dir55 	98% 	-
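All six reported mount points sit on the same underlying device, so one way the rule could collapse the duplicates is to key the df entries on the source device (column 1). A minimal sketch of that dedup idea, using sample data modeled on this report (the awk one-liner is an illustration, not the actual rule implementation):

```shell
# Sample 'df -P' output containing bind-mount duplicates: every bind mount
# reports the same source device as the real filesystem it lives on.
df_sample='Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/mapper/newvg-newlv 3030800 2800000 59392 98% /fullfs
/dev/mapper/newvg-newlv 3030800 2800000 59392 98% /fullfs/dir11
/dev/mapper/newvg-newlv 3030800 2800000 59392 98% /fullfs/dir22
/dev/sda1 1038336 146432 892928 15% /boot'

# Keep the header plus only the first line seen per source device, so the
# bind mounts of /dev/mapper/newvg-newlv collapse into the /fullfs entry.
printf '%s\n' "$df_sample" | awk 'NR==1 || !seen[$1]++'
```

With the sample above, the pipeline prints the header, one line for /fullfs, and one line for /boot; the /fullfs/dirNN bind mounts are suppressed because their device was already reported.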

Comment 5 Qi Guo [Flos] 2019-01-07 05:55:40 UTC
The fix is released in the latest version of the Insights plugins. Closing this bug.

