Bug 1381964

Summary: [RFE] Thinly provisioned RBD image creation/resize should check RBD pool free size
Product: Red Hat Ceph Storage
Component: RBD
Version: 1.3.3
Reporter: Vikhyat Umrao <vumrao>
Assignee: Jason Dillaman <jdillama>
QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Status: CLOSED NOTABUG
Severity: medium
Priority: medium
Hardware: x86_64
OS: Linux
Keywords: FutureFeature
Doc Type: Enhancement
Type: Bug
Target Milestone: rc
Target Release: 3.0
CC: anharris, bniver, ceph-eng-bugs, dwysocha, flucifre, hnallurv, linuxkidd, mhackett, vikumar
Bug Blocks: 1258382
Last Closed: 2017-08-02 09:46:33 UTC

Description Vikhyat Umrao 2016-10-05 12:36:54 UTC
Description of problem:

[RFE] Thinly provisioned RBD image creation/resize should check RBD pool free size

Version-Release number of selected component (if applicable):
Red Hat Ceph Storage 1.3
Red Hat Ceph Storage 2.0

How reproducible:
Always


Additional info:
------------------

[root@ceph4-230 ~]# ceph -v
ceph version 10.2.2-38.el7cp (119a68752a5671253f9daae3f894a90313a6b8e4)
[root@ceph4-230 ~]# 

[root@ceph4-230 ~]# ceph -s
    cluster e9b22e9e-27a2-47e6-8b64-65312a8c13c1
     health HEALTH_OK
     monmap e1: 3 mons at {ceph4-230=192.168.4.230:6789/0,ceph4-231=192.168.4.231:6789/0,ceph4-232=192.168.4.232:6789/0}
            election epoch 28, quorum 0,1,2 ceph4-230,ceph4-231,ceph4-232
      fsmap e21: 1/1/1 up {0=ceph4-231=up:active}, 2 up:standby
     osdmap e49: 6 osds: 6 up, 6 in
            flags sortbitwise
      pgmap v27771: 320 pgs, 3 pools, 2068 bytes data, 21 objects
            240 MB used, 113 GB / 113 GB avail
                 320 active+clean
  client io 202 B/s rd, 0 op/s rd, 0 op/s wr

[root@ceph4-230 ~]# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED 
    113G      113G         240M          0.21 
POOLS:
    NAME                ID     USED     %USED     MAX AVAIL     OBJECTS 
    rbd                 0         0         0        38806M           1 
    cephfs_data         1         0         0        38806M           0 
    cephfs_metadata     2      2068         0        38806M          20 

[root@ceph4-230 ~]# rbd create rbd/testrbd -s 1T 

^^ I am able to create an RBD image of size 1T even though the pool has only 38806M available.

[root@ceph4-230 ~]# rbd -p rbd ls -l 
NAME     SIZE PARENT FMT PROT LOCK 
testrbd 1024G          2           

[root@ceph4-230 ~]# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED 
    113G      113G         240M          0.21 
POOLS:
    NAME                ID     USED      %USED     MAX AVAIL     OBJECTS 
    rbd                 0      65649         0        38805M           4 
    cephfs_data         1          0         0        38805M           0 
    cephfs_metadata     2       2068         0        38805M          20 
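
For reference, the figure such a check would need to consult is already exposed by ceph df: with --format json, the per-pool MAX AVAIL can be read programmatically. A minimal sketch (the jq dependency is an assumption, not part of the product):

# Print MAX AVAIL for the rbd pool, in bytes, as reported by "ceph df".
ceph df --format json | jq -r '.pools[] | select(.name == "rbd") | .stats.max_avail'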


[root@ceph4-230 ~]# rbd resize rbd/testrbd -s 2T 
Resizing image: 100% complete...done.

^^ I am able to resize the image to 2T even though the rbd pool has only 38805M max available.

[root@ceph4-230 ~]# rbd -p rbd ls -l 
NAME     SIZE PARENT FMT PROT LOCK 
testrbd 2048G          2    
   
   
[root@ceph4-230 ~]# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED 
    113G      113G         240M          0.21 
POOLS:
    NAME                ID     USED     %USED     MAX AVAIL     OBJECTS 
    rbd                 0      128k         0        38805M           4 
    cephfs_data         1         0         0        38805M           0 
    cephfs_metadata     2      2068         0        38805M          20
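
A hard pre-flight check of the kind this RFE requests could already be scripted on the client side. A rough sketch for the resize case above, assuming jq is installed and comparing the new size in bytes against the pool's MAX AVAIL:

# Refuse a resize whose target exceeds the pool's MAX AVAIL (sketch only).
NEW_SIZE=$((2 * 1024 ** 4))   # 2T resize target, in bytes
MAX_AVAIL=$(ceph df --format json | jq -r '.pools[] | select(.name == "rbd") | .stats.max_avail')
if [ "$NEW_SIZE" -gt "$MAX_AVAIL" ]; then
    echo "refusing: 2T exceeds pool MAX AVAIL ($MAX_AVAIL bytes)" >&2
else
    rbd resize rbd/testrbd -s 2T
fi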

Comment 4 Brett Niver 2017-08-02 09:46:33 UTC
Closing this BZ as being counter to the purpose of a thinly provisioned storage system.

Comment 5 Vikhyat Umrao 2017-08-02 13:59:47 UTC
(In reply to Brett Niver from comment #4)
> Closing this BZ as being counter to the purpose of a thinly provisioned
> storage system.

Brett, I agree, and I said the same in comment#3, but I think we should warn an admin or a user (print a warning message on the command line and ask yes or no) if they choose a size beyond MAX AVAIL.
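
For illustration, the proposed behaviour could look like the following wrapper. This is only a sketch (rbd itself has no such option, and the script name is hypothetical); it assumes jq and GNU coreutils numfmt are available:

#!/bin/bash
# rbd-safe-create (hypothetical wrapper): warn and ask for confirmation when
# the requested image size exceeds the pool's MAX AVAIL.
# Usage: rbd-safe-create <pool> <image> <size>, e.g. rbd-safe-create rbd testrbd 1T
POOL="$1"; IMAGE="$2"; SIZE="$3"

# Convert the human-readable size (K/M/G/T, powers of 1024) to bytes.
SIZE_BYTES=$(numfmt --from=iec "$SIZE")

# MAX AVAIL for the pool, in bytes, as reported by "ceph df".
MAX_AVAIL=$(ceph df --format json |
    jq -r --arg pool "$POOL" '.pools[] | select(.name == $pool) | .stats.max_avail')

if [ "$SIZE_BYTES" -gt "$MAX_AVAIL" ]; then
    echo "WARNING: requested size $SIZE exceeds MAX AVAIL ($MAX_AVAIL bytes) in pool '$POOL'."
    read -r -p "Create anyway? [y/N] " answer
    [ "$answer" = "y" ] || [ "$answer" = "Y" ] || { echo "Aborted."; exit 1; }
fi

rbd create "$POOL/$IMAGE" -s "$SIZE"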

Comment 6 Federico Lucifredi 2017-09-06 19:02:19 UTC
Because RBD allows thin provisioning, it is trivial to overcommit storage.

I do not think this warrants a warning, as it would be an exceedingly common one in many configurations; overcommitting is the rule rather than the exception there.
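
As a rough measure of how overcommitted a pool already is, the total provisioned size of its images can be compared against MAX AVAIL. A sketch, assuming jq and that "rbd ls -l --format json" reports image sizes in bytes:

# Total provisioned bytes across all images in the rbd pool vs. MAX AVAIL.
PROVISIONED=$(rbd ls -l --format json -p rbd | jq '[.[].size] | add')
MAX_AVAIL=$(ceph df --format json | jq -r '.pools[] | select(.name == "rbd") | .stats.max_avail')
echo "provisioned: $PROVISIONED bytes, pool MAX AVAIL: $MAX_AVAIL bytes"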