Bug 1427512

Summary: [RFE] add an option to mount CephFS so that it reflects the characteristics of the pool it uses, not the global `ceph df` statistics
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Tomas Petr <tpetr>
Component: CephFS
Assignee: Douglas Fuller <dfuller>
Status: CLOSED ERRATA
QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: medium
Docs Contact: Bara Ancincova <bancinco>
Priority: medium
Version: 2.1
CC: anharris, ceph-eng-bugs, ceph-qe-bugs, dfuller, edonnell, icolle, idryomov, john.spray, kdreyer, pdonnell, shmohan, tpetr, vumrao
Target Milestone: rc
Keywords: FutureFeature
Target Release: 3.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: RHEL: ceph-12.2.1-1.el7cp Ubuntu: ceph_12.2.1-2redhat1xenial
Doc Type: Enhancement
Doc Text:
.On a CephFS with only one data pool, the `ceph df` command shows characteristics of that pool
On Ceph File Systems that contain only one data pool, the `ceph df` command shows results that reflect the storage space used and available in that data pool. For now, this functionality is available only for FUSE clients; it will become available for kernel clients in a future release of Red{nbsp}Hat Enterprise Linux.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-12-05 23:32:37 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1494421    

Description Tomas Petr 2017-02-28 13:08:19 UTC
Description of problem:
Currently, when you have mounted a CephFS, `df` reflects the GLOBAL statistics from `ceph df`:
# ceph df 
GLOBAL:
    SIZE      AVAIL     RAW USED     %RAW USED
    8828G     7233G        1594G         18.06  <-------currently reflects this row
POOLS:
    NAME             ID     USED       %USED     MAX AVAIL     OBJECTS
    rbd              0        1058         0         3210G           5
    vms              3           0         0         3210G           0
    cephfs           4        784G     17.78         3210G      414252   <------------ [RFE]reflect this row
    cephfs-metadata  5      54607k         0         3210G        8659

current output:
fuse-client  $ df -h
Filesystem             Size  Used Avail Use% Mounted on
ceph-fuse              8.7T  1.6T  7.1T  18% /ceph

We would like to mount CephFS with an option that reflects only the pool it uses, so the output on the CephFS client would look like:
fuse-client  $ df -h
Filesystem             Size  Used Avail Use% Mounted on
ceph-fuse              3.2T  784G  2.4T  24% /ceph


We are aware that the value of "MAX AVAIL" can change depending on data growth in other pools, the replica count, and so on.

Having this option available can help prevent multiple users from backing up large amounts of data at the same time when there is not enough space available for all of them.
"That is really no different than two different people both seeing there is 2TB free in the filesystem and both starting a 1.5TB copy into it at the same time."
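The per-pool numbers requested above can be derived today from `ceph df --format json`. The following sketch (not part of the fix referenced later in this bug) illustrates the arithmetic using the figures from the `ceph df` listing in the description; the `pool_df` helper is hypothetical, and it treats the pool's MAX AVAIL as the filesystem size, matching the expected `df -h` output sketched above.

```python
import json

# Sample pool stats, as reported by `ceph df --format json` on a Jewel-era
# cluster (3210G max_avail, 784G bytes_used for the "cephfs" data pool).
sample = json.loads("""
{
  "pools": [
    {"name": "cephfs",
     "stats": {"max_avail": 3446711255040, "bytes_used": 841813590016}}
  ]
}
""")

def pool_df(pools, name):
    """Return (size, used, avail) in bytes for one pool, taking the pool's
    max_avail as the filesystem size and size minus used as avail."""
    stats = next(p["stats"] for p in pools if p["name"] == name)
    size = stats["max_avail"]
    used = stats["bytes_used"]
    return size, used, size - used

size, used, avail = pool_df(sample["pools"], "cephfs")
print(f"Size={size / 2**40:.1f}T Used={used / 2**30:.0f}G "
      f"Avail={avail / 2**40:.1f}T Use%={100 * used / size:.0f}%")
# -> Size=3.1T Used=784G Avail=2.4T Use%=24%
```

Note that the Use% comes out to roughly 24%, not the 18% of the global statistics, which is exactly the distinction this RFE asks `df` on the client to make.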

Version-Release number of selected component (if applicable):
ceph-base-10.2.2-38.el7cp.x86_64                            Tue Jan 31 16:14:45 2017
ceph-common-10.2.2-38.el7cp.x86_64                          Tue Jan 31 16:14:39 2017
ceph-deploy-1.5.33-1.el7cp.noarch                           Fri Jan 27 15:42:03 2017
ceph-mds-10.2.2-38.el7cp.x86_64                             Fri Feb  3 15:13:58 2017
ceph-mon-10.2.2-38.el7cp.x86_64                             Tue Jan 31 16:14:45 2017
ceph-osd-10.2.2-38.el7cp.x86_64                             Thu Feb  2 12:04:41 2017
ceph-selinux-10.2.2-38.el7cp.x86_64                         Tue Jan 31 16:14:39 2017
libcephfs1-10.2.2-38.el7cp.x86_64                           Wed Jan  4 10:52:01 2017
python-cephfs-10.2.2-38.el7cp.x86_64 

How reproducible:
Every time

Steps to Reproduce:
1. Create a CephFS pool and file system.
2. Mount the file system on a client.
3. On the client, run `df -h`.

Actual results:
On the client, the `df` output reflects the global `ceph df` statistics.

Expected results:
An option to have `df` report status only for the pool that CephFS uses.

Additional info:

Comment 8 Patrick Donnelly 2017-08-08 17:37:15 UTC
Fix merged in:

https://github.com/ceph/ceph/commit/eabe6626141df3f1b253c880aa6cb852c8b7ac1d

Waiting on kernel client support.

Comment 29 errata-xmlrpc 2017-12-05 23:32:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3387