Description of problem:

Currently, when CephFS is mounted, df on the client reflects the GLOBAL statistics:

  # ceph df
  GLOBAL:
      SIZE      AVAIL     RAW USED     %RAW USED
      8828G     7233G     1594G        18.06         <-- currently reflects this row
  POOLS:
      NAME                ID     USED       %USED     MAX AVAIL     OBJECTS
      rbd                 0      1058       0         3210G         5
      vms                 3      0          0         3210G         0
      cephfs              4      784G       17.78     3210G         414252     <-- [RFE] reflect this row
      cephfs-metadata     5      54607k     0         3210G         8659

Current output:

  fuse-client $ df -h
  Filesystem      Size  Used Avail Use% Mounted on
  ceph-fuse       8.7T  1.6T  7.1T  18% /ceph

We would like to mount CephFS with an option so that df reflects only the pool it uses. The output on the CephFS client would then look like:

  fuse-client $ df -h
  Filesystem      Size  Used Avail Use% Mounted on
  ceph-fuse       3.2T  784G  2.4T  24% /ceph

We are aware that the value of "MAX AVAIL" can change depending on data growth in other pools, the replica count, etc. Still, having this option available can prevent multiple users from backing up large amounts of data at the same time when there is not enough space available for all of them.

"That is really no different than two different people both seeing there is 2TB free in the Filesystem and both starting a 1.5TB copy into it at the same time."

Version-Release number of selected component (if applicable):
ceph-base-10.2.2-38.el7cp.x86_64        Tue Jan 31 16:14:45 2017
ceph-common-10.2.2-38.el7cp.x86_64      Tue Jan 31 16:14:39 2017
ceph-deploy-1.5.33-1.el7cp.noarch       Fri Jan 27 15:42:03 2017
ceph-mds-10.2.2-38.el7cp.x86_64         Fri Feb  3 15:13:58 2017
ceph-mon-10.2.2-38.el7cp.x86_64         Tue Jan 31 16:14:45 2017
ceph-osd-10.2.2-38.el7cp.x86_64         Thu Feb  2 12:04:41 2017
ceph-selinux-10.2.2-38.el7cp.x86_64     Tue Jan 31 16:14:39 2017
libcephfs1-10.2.2-38.el7cp.x86_64       Wed Jan  4 10:52:01 2017
python-cephfs-10.2.2-38.el7cp.x86_64

How reproducible:
Every time

Steps to Reproduce:
1. Create the CephFS pools and filesystem
2. Mount CephFS on a client
3. On the client, run: df -h

Actual results:
On the client, the df output reflects the global ceph df status.

Expected results:
Add an option so that df reflects the status of just the pool that CephFS uses.

Additional info:
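The per-pool view requested above can be approximated today by reading the pool statistics from `ceph df --format json` on an admin node. A minimal sketch follows; the `pools` / `stats` / `bytes_used` / `max_avail` field names are assumptions based on Jewel-era JSON output and may differ between releases:

```python
def pool_df(ceph_df_json, pool_name):
    """Derive df-style Size/Used/Avail for one pool from parsed `ceph df -f json`.

    Size is approximated as used + max_avail, mirroring how the desired
    client output in this RFE is built from the POOLS row.
    """
    for pool in ceph_df_json["pools"]:
        if pool["name"] == pool_name:
            used = pool["stats"]["bytes_used"]
            avail = pool["stats"]["max_avail"]
            return {"size": used + avail, "used": used, "avail": avail}
    raise KeyError("pool %r not found" % pool_name)

# Hypothetical sample using the numbers from this report
# (cephfs pool: 784G used, 3210G max avail).
sample = {
    "pools": [
        {"name": "cephfs",
         "stats": {"bytes_used": 784 * 2**30, "max_avail": 3210 * 2**30}},
    ]
}
stats = pool_df(sample, "cephfs")
print(stats["used"] // 2**30, stats["avail"] // 2**30)  # 784 3210
```

This only approximates the requested behavior from the admin side; the RFE itself is about the client's statfs (and hence df) reporting these numbers directly.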
Fix merged in: https://github.com/ceph/ceph/commit/eabe6626141df3f1b253c880aa6cb852c8b7ac1d

Waiting on kernel client support.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:3387