Bug 1381964 - [RFE] Thinly provisioned RBD image creation/resize should check RBD pool free size
Summary: [RFE] Thinly provisioned RBD image creation/resize should check RBD pool free size
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: RBD
Version: 1.3.3
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 3.0
Assignee: Jason Dillaman
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks: 1258382
 
Reported: 2016-10-05 12:36 UTC by Vikhyat Umrao
Modified: 2019-12-16 07:00 UTC
CC: 9 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-02 09:46:33 UTC
Target Upstream Version:




Links:
Ceph Project Bug Tracker 17502 (last updated 2016-10-05 12:54:52 UTC)

Description Vikhyat Umrao 2016-10-05 12:36:54 UTC
Description of problem:

[RFE] Thinly provisioned RBD image creation/resize should check RBD pool free size

Version-Release number of selected component (if applicable):
Red Hat Ceph Storage 1.3
Red Hat Ceph Storage 2.0

How reproducible:
Always


Additional info:
------------------

[root@ceph4-230 ~]# ceph -v
ceph version 10.2.2-38.el7cp (119a68752a5671253f9daae3f894a90313a6b8e4)
[root@ceph4-230 ~]# 

[root@ceph4-230 ~]# ceph -s
    cluster e9b22e9e-27a2-47e6-8b64-65312a8c13c1
     health HEALTH_OK
     monmap e1: 3 mons at {ceph4-230=192.168.4.230:6789/0,ceph4-231=192.168.4.231:6789/0,ceph4-232=192.168.4.232:6789/0}
            election epoch 28, quorum 0,1,2 ceph4-230,ceph4-231,ceph4-232
      fsmap e21: 1/1/1 up {0=ceph4-231=up:active}, 2 up:standby
     osdmap e49: 6 osds: 6 up, 6 in
            flags sortbitwise
      pgmap v27771: 320 pgs, 3 pools, 2068 bytes data, 21 objects
            240 MB used, 113 GB / 113 GB avail
                 320 active+clean
  client io 202 B/s rd, 0 op/s rd, 0 op/s wr

[root@ceph4-230 ~]# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED 
    113G      113G         240M          0.21 
POOLS:
    NAME                ID     USED     %USED     MAX AVAIL     OBJECTS 
    rbd                 0         0         0        38806M           1 
    cephfs_data         1         0         0        38806M           0 
    cephfs_metadata     2      2068         0        38806M          20 
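
For context, MAX AVAIL here is roughly the raw AVAIL divided by the pool's replication factor: assuming the default 3x replicated pools (the pool size is not shown above), 113G / 3 ~= 37.7G, which matches the 38806M reported. Ceph also accounts for the fullest OSD, so the exact figure can be somewhat lower.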

[root@ceph4-230 ~]# rbd create rbd/testrbd -s 1T 

^^ I am able to create a 1T RBD image even though the pool has only 38806M available.
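
For reference, the free-space figure this RFE wants consulted is already exposed by ceph df. A minimal sketch of pulling it out programmatically, assuming the pools[].stats.max_avail JSON field (value in bytes) present on this Jewel-based build, and that jq is installed:

# MAX AVAIL (bytes) for the rbd pool
ceph df --format json | jq '.pools[] | select(.name == "rbd") | .stats.max_avail'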

[root@ceph4-230 ~]# rbd -p rbd ls -l 
NAME     SIZE PARENT FMT PROT LOCK 
testrbd 1024G          2           

[root@ceph4-230 ~]# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED 
    113G      113G         240M          0.21 
POOLS:
    NAME                ID     USED      %USED     MAX AVAIL     OBJECTS 
    rbd                 0      65649         0        38805M           4 
    cephfs_data         1          0         0        38805M           0 
    cephfs_metadata     2       2068         0        38805M          20 


[root@ceph4-230 ~]# rbd resize rbd/testrbd -s 2T 
Resizing image: 100% complete...done.

^^ I am able to resize the image to 2T even though the rbd pool has only 38805M max available.

[root@ceph4-230 ~]# rbd -p rbd ls -l 
NAME     SIZE PARENT FMT PROT LOCK 
testrbd 2048G          2    
   
   
[root@ceph4-230 ~]# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED 
    113G      113G         240M          0.21 
POOLS:
    NAME                ID     USED     %USED     MAX AVAIL     OBJECTS 
    rbd                 0      128k         0        38805M           4 
    cephfs_data         1         0         0        38805M           0 
    cephfs_metadata     2      2068         0        38805M          20
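
The provisioned size is never actually consumed, which is the crux of the discussion below. To compare provisioned versus actual usage, rbd du can be run against the image (note: without the fast-diff image feature it falls back to a full object scan):

rbd du rbd/testrbd    # reports PROVISIONED vs USED; USED remains near zero here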

Comment 4 Brett Niver 2017-08-02 09:46:33 UTC
Closing this BZ as being counter to the purpose of a thinly provisioned storage system.

Comment 5 Vikhyat Umrao 2017-08-02 13:59:47 UTC
(In reply to Brett Niver from comment #4)
> Closing this BZ as being counter to the purpose of a thinly provisioned
> storage system.

Brett, I agree, and I said the same in comment#3, but I think we should warn an admin or user (print a warning on the command line and ask yes/no) when they provision beyond MAX AVAIL.
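
A minimal sketch of such a prompt, as a hypothetical wrapper script around the rbd CLI (the max_avail JSON field layout and the three-argument interface are assumptions, not an existing tool):

#!/bin/bash
# rbd-create-checked.sh (hypothetical): warn and ask for confirmation
# before creating an image larger than the pool's MAX AVAIL.
pool=$1; image=$2; size_gb=$3

# MAX AVAIL for the pool, in bytes (assumes jq and the Jewel ceph df JSON layout)
avail_bytes=$(ceph df --format json |
    jq --arg p "$pool" '.pools[] | select(.name == $p) | .stats.max_avail')
req_bytes=$(( size_gb * 1024 * 1024 * 1024 ))

if [ "$req_bytes" -gt "$avail_bytes" ]; then
    echo "warning: ${size_gb}G exceeds MAX AVAIL (${avail_bytes} bytes) for pool ${pool}" >&2
    read -r -p "Create anyway? [y/N] " answer
    [ "$answer" = "y" ] || exit 1
fi
rbd create "${pool}/${image}" -s "${size_gb}G"

For example, ./rbd-create-checked.sh rbd testrbd 1024 would have prompted before creating the 1T image in the reproducer above.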

Comment 6 Federico Lucifredi 2017-09-06 19:02:19 UTC
Because RBD allows thin provisioning, it is trivial to overcommit storage.

I do not think this warrants a warning, as it would fire constantly in many configurations; overcommit is the rule rather than the exception there.
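
For scale, the reproducer above illustrates the point: a single 2T image provisions roughly 54x the pool's 38805M MAX AVAIL (2048G / ~37.9G), which is unremarkable for a thinly provisioned pool.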

