Bug 1661037

Summary: Filesystem full: confusing output when bind mounts are used
Product: Red Hat Hybrid Cloud Console (console.redhat.com)
Component: Insights - Rules
Version: unspecified
Hardware: x86_64
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: medium
Reporter: Amar Huchchanavar <ahuchcha>
Assignee: Flos Qi Guo <qguo>
QA Contact: Jeff Needle <jneedle>
Docs Contact: Kevin Blake <kblake>
CC: dajohnso, jnewton, robwilli, xiaoxwan
Last Closed: 2019-01-07 05:55:40 UTC
Type: Bug

Description Amar Huchchanavar 2018-12-19 22:16:23 UTC
Description of problem:
Rule Name: Decreased stability and/or performance due to filesystem over 95% capacity

The filesystem-full rule's report includes duplicate entries when bind mounts are in use. See screenshot.

Version-Release number of selected component (if applicable):
Insights

How reproducible:
Always

Steps to Reproduce:
- Create a 5 GB filesystem mounted at /fullfs
- Create 10 directories /fullfs/bindmountX
- Bind-mount each of the 10 directories
- Create a 4.95 GB file in /fullfs so usage exceeds 95%
- Run insights-client

Actual results:
Please check the attached screenshots.

Expected results:
Either show a single entry for the actual filesystem, or list the bind mounts in a separate section.
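Since bind mounts of the same filesystem share the same source device in df output, one way to collapse the duplicates is to keep only the first row per device. The sketch below is illustrative only (the sample rows mimic the df output from this report; it is not the actual Insights rule code):

```shell
# Illustrative dedup: keep the first df row per source device (column 1),
# so bind mounts of the same filesystem appear only once.
dedup_by_device() {
  awk 'NR == 1 || !seen[$1]++'
}

# Sample rows mimicking the report's df output (not live data).
df_sample='Filesystem Use% Mounted on
/dev/mapper/newvg-newlv 98% /fullfs
/dev/mapper/newvg-newlv 98% /fullfs/dir11
/dev/mapper/newvg-newlv 98% /fullfs/dir22
/dev/mapper/newvg-newlv 98% /fullfs/dir33'

printf '%s\n' "$df_sample" | dedup_by_device
```

With this filter, only the /fullfs row survives alongside the header, which matches the "one entry for the actual filesystem" option above.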

Additional info:
Local Reproducer:

# df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/rhel_vm250--198-root   13G   11G  1.7G  87% /
devtmpfs                          1.9G     0  1.9G   0% /dev
tmpfs                             1.9G     0  1.9G   0% /dev/shm
tmpfs                             1.9G   17M  1.9G   1% /run
tmpfs                             1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1                        1014M  143M  872M  15% /boot
tmpfs                             379M     0  379M   0% /run/user/0
/dev/mapper/newvg-newlv           2.9G  2.7G   58M  98% /fullfs  <<<<<<


# du -sh /fullfs/*
4.0K	/fullfs/dir1
4.0K	/fullfs/dir2
4.0K	/fullfs/dir3
4.0K	/fullfs/dir4
4.0K	/fullfs/dir5
16K	/fullfs/lost+found
501M	/fullfs/test2.img
501M	/fullfs/test3.img
501M	/fullfs/test4.img
200M	/fullfs/test5.img
1.1G	/fullfs/test.img



# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Mon Sep  3 07:48:11 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel_vm250--198-root /                       xfs     defaults        0 0
UUID=9bd3db7f-2a89-4852-b65a-240570ab7635 /boot                   xfs     defaults        0 0
/dev/mapper/rhel_vm250--198-swap swap                    swap    defaults        0 0
/dev/mapper/newvg-newlv		/fullfs			ext4	defaults 	0 0 


/fullfs/dir1	/fullfs/dir11		none 	bind
/fullfs/dir2    /fullfs/dir22           none    bind
/fullfs/dir3    /fullfs/dir33           none    bind
/fullfs/dir4    /fullfs/dir44           none    bind
/fullfs/dir5    /fullfs/dir55           none    bind



Insights rule results:

Detected issues
This host has the following file system(s) nearing or at capacity/inode usage:
Filesystem 	Used Capacity % 	Used INode %
/fullfs 	98% 	-
/fullfs/dir11 	98% 	-
/fullfs/dir22 	98% 	-
/fullfs/dir33 	98% 	-
/fullfs/dir44 	98% 	-
/fullfs/dir55 	98% 	-

Comment 5 Flos Qi Guo 2019-01-07 05:55:40 UTC
The fix is released in the latest version of the Insights plugins. Closing this bug.