Bug 1332513 - [RFE] rados bench: add a cleanup message with the time taken to delete the written objects when cleanup starts
Summary: [RFE] rados bench: add a cleanup message with the time taken to delete the ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 1.3.2
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 2.1
Assignee: Vikhyat Umrao
QA Contact: Vasishta
Docs Contact: Bara Ancincova
URL:
Whiteboard:
Depends On:
Blocks: 1383917
 
Reported: 2016-05-03 11:20 UTC by Vikhyat Umrao
Modified: 2019-11-14 07:55 UTC
CC List: 8 users

Fixed In Version: RHEL: ceph-10.2.3-2.el7cp Ubuntu: ceph_10.2.3-3redhat1xenial
Doc Type: Enhancement
Doc Text:
."rados bench" now shows how much time it took to clean up objects With this update, the `rados bench` command output includes a line that shows how much time it took to clean up objects: ---- Clean up completed and total clean up time :8.492848 ----
Clone Of:
Environment:
Last Closed: 2016-11-22 19:25:39 UTC
Embargoed:




Links
Ceph Project Bug Tracker 15704 (last updated 2016-05-03 11:23:16 UTC)
Red Hat Knowledge Base (Solution) 2305991 (last updated 2016-05-12 13:26:11 UTC)
Red Hat Product Errata RHSA-2016:2815, SHIPPED_LIVE: "Moderate: Red Hat Ceph Storage security, bug fix, and enhancement update" (last updated 2017-03-22 02:06:33 UTC)

Description Vikhyat Umrao 2016-05-03 11:20:25 UTC
Description of problem:
[RFE] rados bench: add a cleanup message reporting the time taken to delete the written objects when cleanup starts

# rados -p rbd  bench 20  write -b 8192 -t 256
 Maintaining 256 concurrent writes of 8192 bytes for up to 20 seconds or 0 objects
 Object prefix: benchmark_data_dell-per630-8.gsslab.pnq2.red_59618
   sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
     0       0         0         0         0         0         -         0
     1     255     11275     11020   85.9967   86.0938 0.0209201  0.022394
     2     255     21299     21044   82.1255   78.3125 0.0160036 0.0240254
     3     256     33494     33238   86.5002   95.2656 0.0176813 0.0229041
     4     255     44851     44596   87.0498   88.7344 0.0204237 0.0228027

 ................................
 .................................

2016-05-03 16:45:28.715926 min lat: 0.00156224 max lat: 7.1998 avg lat: 0.0559234
   sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
    20     254     85867     85613   33.4264   6.19531 0.00497779 0.0559234
    21     254     85867     85613   31.8353         0         - 0.0559234
    22     254     85867     85613   30.3887         0         - 0.0559234
 Total time run:         22.334877
Total writes made:      85867
Write size:             8192
Bandwidth (MB/sec):     30.035 

Stddev Bandwidth:       34.5402
Max bandwidth (MB/sec): 95.2656
Min bandwidth (MB/sec): 0
Average Latency:        0.0664779
Stddev Latency:         0.456263
Max latency:            7.1998
Min latency:            0.00156224
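
(Sanity-checking the summary above with my own arithmetic, not part of the tool output: 85867 writes x 8192 bytes = 703,422,464 bytes ~= 670.8 MB, and 670.8 MB / 22.334877 s ~= 30.03 MB/s, which matches the reported bandwidth.)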


- After the benchmark completes, there is a long pause while rados bench deletes the written objects (cleanup).

- Add a cleanup message reporting the time taken to delete the written objects when cleanup starts.

Version-Release number of selected component (if applicable):
Red Hat Ceph Storage 1.3.2

Comment 1 Vikhyat Umrao 2016-05-03 16:48:46 UTC
Upstream Patch : https://github.com/ceph/ceph/pull/8913
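
For illustration only, a minimal standalone sketch of the kind of change involved. This is not the actual patch (see the PR above); the real code lives in Ceph's ObjBencher cleanup path and uses Ceph's own clock types, whereas this sketch uses std::chrono and a hypothetical delete_benchmark_objects() stand-in:

#include <chrono>
#include <cstdio>
#include <thread>

// Hypothetical stand-in for the per-object remove() loop that rados bench
// runs during cleanup.
static void delete_benchmark_objects() {
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
}

int main() {
    printf("Cleaning up (deleting benchmark objects)\n");
    auto start = std::chrono::steady_clock::now();
    delete_benchmark_objects();
    std::chrono::duration<double> elapsed =
        std::chrono::steady_clock::now() - start;
    // Same message format as in the verified output in comment 13.
    printf("Clean up completed and total clean up time :%f\n", elapsed.count());
    return 0;
}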

Comment 3 Vikhyat Umrao 2016-06-16 04:08:50 UTC
Upstream Jewel Backport Tracker : http://tracker.ceph.com/issues/16338
Upstream Jewel Backport PR : https://github.com/ceph/ceph/pull/9740

Comment 13 Vasishta 2016-10-17 16:49:33 UTC
Working as required

$ sudo rados -p 1332513 bench 10 write -b 10240
Maintaining 16 concurrent writes of 10240 bytes to objects of size 10240 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_magna111_26595
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16       372       356   3.47637   3.47656  0.00966636   0.0445187
    2      16       610       594   2.90006   2.32422  0.00239654   0.0464271
    3      16       687       671   2.18397  0.751953     0.13433   0.0692285
    4      16      1178      1162   2.83655   4.79492   0.0292036   0.0532861
    5      16      1699      1683   3.28669   5.08789   0.0465653   0.0472847
    6      16      1999      1983   3.22715   2.92969  0.00958194   0.0475815
    7      16      2311      2295   3.20134   3.04688   0.0293496   0.0479501
    8      16      2680      2664   3.25157   3.60352   0.0235217   0.0475715
    9      16      2936      2920   3.16804       2.5   0.0399677   0.0491101
   10      16      3211      3195   3.11976   2.68555  0.00850986   0.0499317
Total time run:         10.080578
Total writes made:      3211
Write size:             10240
Object size:            10240
Bandwidth (MB/sec):     3.11068
Stddev Bandwidth:       1.24251
Max bandwidth (MB/sec): 5.08789
Min bandwidth (MB/sec): 0.751953
Average IOPS:           318
Stddev IOPS:            127
Max IOPS:               521
Min IOPS:               77
Average Latency(s):     0.0502264
Stddev Latency(s):      0.0713263
Max latency(s):         0.980568
Min latency(s):         0.00174271
Cleaning up (deleting benchmark objects)
Clean up completed and total clean up time :8.492848
                                            ---------
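
(Checking the numbers with my own arithmetic, not part of the tool output: 3211 writes / 10.080578 s ~= 318 IOPS, matching "Average IOPS: 318" above. Also worth noting: cleanup took 8.49 s against a 10.08 s write phase, which is exactly the "long pause" this RFE asked to make visible.)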

Comment 15 errata-xmlrpc 2016-11-22 19:25:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2815.html

