Bug 1427512 - [RFE] add option to enable mounting CephFS so that it reflects used pool characteristics, not `ceph df` global
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 3.0
Assignee: Douglas Fuller
QA Contact: ceph-qe-bugs
Docs Contact: Bara Ancincova
URL:
Whiteboard:
Depends On:
Blocks: 1494421
 
Reported: 2017-02-28 13:08 UTC by Tomas Petr
Modified: 2020-05-14 15:41 UTC
CC: 13 users

Fixed In Version: RHEL: ceph-12.2.1-1.el7cp Ubuntu: ceph_12.2.1-2redhat1xenial
Doc Type: Enhancement
Doc Text:
.On a CephFS with only one data pool, the `ceph df` command shows characteristics of that pool

On Ceph File Systems that contain only one data pool, the `ceph df` command shows results that reflect the file storage space used and available in that data pool. This new functionality is currently available for FUSE clients only; kernel client support will be added in a future release of Red{nbsp}Hat Enterprise Linux.
Clone Of:
Environment:
Last Closed: 2017-12-05 23:32:37 UTC
Embargoed:


Attachments


Links:
System                                  Priority  Status        Summary                                                                  Last Updated
Ceph Project Bug Tracker 19109          None      None          None                                                                     2017-02-28 13:40:16 UTC
Red Hat Bugzilla 1494987                low       CLOSED        [CephFS]: Discrepancies b/w "df" output of kernel mount and fuse mount  2021-02-22 00:41:40 UTC
Red Hat Product Errata RHBA-2017:3387   normal    SHIPPED_LIVE  Red Hat Ceph Storage 3.0 bug fix and enhancement update                  2017-12-06 03:03:45 UTC

Internal Links: 1494987

Description Tomas Petr 2017-02-28 13:08:19 UTC
Description of problem:
Currently, when CephFS is mounted, df on the client reflects the GLOBAL statistics from ceph df:
# ceph df 
GLOBAL:
    SIZE      AVAIL     RAW USED     %RAW USED
    8828G     7233G        1594G         18.06  <-------currently reflects this row
POOLS:
    NAME             ID     USED       %USED     MAX AVAIL     OBJECTS
    rbd              0        1058         0         3210G           5
    vms              3           0         0         3210G           0
    cephfs           4        784G     17.78         3210G      414252   <------------ [RFE]reflect this row
    cephfs-metadata  5      54607k         0         3210G        8659

current output:
fuse-client  $ df -h
Filesystem             Size  Used Avail Use% Mounted on
ceph-fuse              8.7T  1.6T  7.1T  18% /ceph
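
For reference, a minimal sketch of how such a fuse mount is typically created (the monitor host "mon1" is an assumption for illustration, not taken from this report):
# ceph-fuse -m mon1:6789 /ceph     <-- mount CephFS via the FUSE client
$ df -h /ceph                      <-- currently shows GLOBAL figures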

We would like to mount CephFS with an option that reflects only the pool it uses.
The output on the CephFS client would then look like:
fuse-client  $ df -h
Filesystem             Size  Used Avail Use% Mounted on
ceph-fuse              3.2T  784G  2.4T  18% /ceph


We are aware that the value of "MAX AVAIL" can change depending on data growth in other pools, the replica number, and so on.
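
As a rough, hedged illustration of that relationship (assuming replicated pools with size=2, and ignoring full-ratio headroom and CRUSH imbalance, none of which are stated in this report):
    MAX AVAIL ~= GLOBAL AVAIL / replica count
    e.g. 7233G / 2 ~= 3616G, in the ballpark of the 3210G reported above;
    the gap comes from the factors ignored in this estimate.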

Having this option available can prevent multiple users from backing up large amounts of data at the same time when there is not enough space available for all of them:
"That is really no different than two different people both seeing there is 2TB free in the filesystem and both starting a 1.5TB copy into it at the same time."

Version-Release number of selected component (if applicable):
ceph-base-10.2.2-38.el7cp.x86_64                            Tue Jan 31 16:14:45 2017
ceph-common-10.2.2-38.el7cp.x86_64                          Tue Jan 31 16:14:39 2017
ceph-deploy-1.5.33-1.el7cp.noarch                           Fri Jan 27 15:42:03 2017
ceph-mds-10.2.2-38.el7cp.x86_64                             Fri Feb  3 15:13:58 2017
ceph-mon-10.2.2-38.el7cp.x86_64                             Tue Jan 31 16:14:45 2017
ceph-osd-10.2.2-38.el7cp.x86_64                             Thu Feb  2 12:04:41 2017
ceph-selinux-10.2.2-38.el7cp.x86_64                         Tue Jan 31 16:14:39 2017
libcephfs1-10.2.2-38.el7cp.x86_64                           Wed Jan  4 10:52:01 2017
python-cephfs-10.2.2-38.el7cp.x86_64 

How reproducible:
Every time

Steps to Reproduce:
1. Create the CephFS pools and file system (see the sketch after this list)
2. Mount CephFS on a client
3. On the client, run: df -h
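
A hedged sketch of these steps (the data/metadata pool names match the ceph df output above; the PG count of 64 and the /ceph mount point are illustrative assumptions):
# ceph osd pool create cephfs 64
# ceph osd pool create cephfs-metadata 64
# ceph fs new cephfs cephfs-metadata cephfs
# ceph-fuse /ceph        <-- on the client
$ df -h /ceph            <-- reports the GLOBAL size/avail, not the pool's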

Actual results:
The df output on the client reflects the global ceph df status.

Expected results:
Add an option to have the reported status reflect just the pool that is used by CephFS.

Additional info:

Comment 8 Patrick Donnelly 2017-08-08 17:37:15 UTC
Fix merged in:

https://github.com/ceph/ceph/commit/eabe6626141df3f1b253c880aa6cb852c8b7ac1d

Waiting on kernel client support.
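
For what it is worth, a hedged way to check the new behavior on a ceph-12.2.x FUSE client whose file system has a single data pool; the figures below are illustrative, derived from the pool numbers in the description rather than captured from a real cluster:

fuse-client $ df -h /ceph
Filesystem      Size  Used Avail Use% Mounted on
ceph-fuse       3.9T  784G  3.2T  20% /ceph

Here Size roughly corresponds to the pool's USED plus MAX AVAIL (784G + 3210G) instead of the GLOBAL raw size.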

Comment 29 errata-xmlrpc 2017-12-05 23:32:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3387

