Bug 1708711 - [BlueStore] RocksDB spillover HDD on compaction
Summary: [BlueStore] RocksDB spillover HDD on compaction
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 3.2
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 5.0
Assignee: Neha Ojha
QA Contact: Manohar Murthy
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2019-05-10 16:17 UTC by Vikhyat Umrao
Modified: 2021-08-30 08:23 UTC
CC: 15 users

Fixed In Version: ceph-16.0.0-8633.el8cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-30 08:22:53 UTC
Embargoed:


Attachments: none


Links
Github ceph/ceph pull 29687 (closed): os/bluestore: v.2 framework for more intelligent DB space usage (last updated 2021-02-08 10:51:04 UTC)
Red Hat Issue Tracker RHCEPH-807 (last updated 2021-08-19 16:43:14 UTC)
Red Hat Knowledge Base Solution 4241061: Ceph - BlueStore BlueFS Spillover Internals (last updated 2019-06-23 16:28:41 UTC)
Red Hat Product Errata RHBA-2021:3294 (last updated 2021-08-30 08:23:38 UTC)

Description Vikhyat Umrao 2019-05-10 16:17:44 UTC
Description of problem:
[BlueStore] RocksDB spillover HDD on compaction

From one of the OSDs:

  "bluefs": {
        "gift_bytes": 0,
        "reclaim_bytes": 0,
        "db_total_bytes": 32212246528,
        "db_used_bytes": 15892217856,
        "wal_total_bytes": 0,
        "wal_used_bytes": 0,
        "slow_total_bytes": 160009551872,
        "slow_used_bytes": 137363456, <===============
        "num_files": 261,
        "log_bytes": 8749056,
        "log_compactions": 1,
        "logged_bytes": 369836032,
        "files_written_wal": 2,
        "files_written_sst": 607,
        "bytes_written_wal": 12874543619,
        "bytes_written_sst": 36489876338
    },
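The spillover is visible directly in these counters: slow_used_bytes is non-zero even though db_used_bytes is well below db_total_bytes, i.e. BlueFS has placed RocksDB files on the slow (HDD) device despite free space on the DB device. A minimal check, as a sketch (osd.1 is an illustrative id; assumes jq is installed):

    # Pull the BlueFS counters from the OSD admin socket
    ceph daemon osd.1 perf dump | jq '.bluefs | {db_total_bytes, db_used_bytes, slow_used_bytes}'
    # slow_used_bytes > 0 while db_used_bytes < db_total_bytes indicates spillover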



Version-Release number of selected component (if applicable):
Red Hat Ceph Storage 3.2.z2
ceph-osd-12.2.8-128.el7cp.x86_64

Comment 20 Trent Lloyd 2019-06-21 09:52:14 UTC
These might be related?

http://tracker.ceph.com/issues/38745 [particularly noting that roughly twice the space is needed during the first compaction]
And also: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-February/033286.html
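A rough back-of-the-envelope check against the counters above, assuming the roughly 2x transient space requirement during compaction that tracker 38745 describes (the factor of two is that tracker's observation, not something measured here):

    # db_used_bytes is ~15.9 GB of live data; a full compaction can transiently need ~2x that
    echo $(( 2 * 15892217856 ))                 # 31784435712 bytes needed transiently
    echo $(( 32212246528 - 2 * 15892217856 ))   # 427810816 bytes (~408 MiB) of headroom
    # With BlueFS allocation granularity and log overhead eating into that headroom,
    # compaction output can overflow onto the HDD (here ~131 MiB of slow_used_bytes).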

Comment 24 Giridhar Ramaraju 2019-08-05 13:08:44 UTC
Updating the QA Contact to Hemant. Hemant will reroute these to the appropriate QE Associate.

Regards,
Giri

Comment 31 skanta 2021-02-08 18:03:39 UTC
Did not find any issues:
 "bluefs": {
        "db_total_bytes": 7681498677248,
        "db_used_bytes": 18612224,
        "wal_total_bytes": 0,
        "wal_used_bytes": 0,
        "slow_total_bytes": 0,
        "slow_used_bytes": 0,
        "num_files": 12,
        "log_bytes": 3014656,
        "log_compactions": 1,
        "logged_bytes": 19599360,
        "files_written_wal": 1,
        "files_written_sst": 2,
        "bytes_written_wal": 33300480,
        "bytes_written_sst": 8192,
        "bytes_written_slow": 0,
        "max_bytes_wal": 0,
        "max_bytes_db": 36700160,
        "max_bytes_slow": 0,
        "read_random_count": 18,
        "read_random_bytes": 3522,
        "read_random_disk_count": 0,
        "read_random_disk_bytes": 0,
        "read_random_buffer_count": 18,
        "read_random_buffer_bytes": 3522,
        "read_count": 48,
        "read_bytes": 165259,
        "read_prefetch_count": 3,
        "read_prefetch_bytes": 3203
    },
    "bluestore": {
        "kv_flush_lat": {
            "avgcount": 4777,
            "sum": 0.124835532,
            "avgtime": 0.000026132
        },
        "kv_commit_lat": {
            "avgcount": 4777,
            "sum": 0.903064971,
            "avgtime": 0.000189044
        },
        "kv_sync_lat": {
            "avgcount": 4777,
            "sum": 1.027900503,
            "avgtime": 0.000215176
        },


Performed the Teuthology BlueStore regression tests:

1. http://pulpito.ceph.redhat.com/skanta-2021-01-19_22:36:10-rados:singleton-bluestore-master-distro-basic-bruuni/
2. http://pulpito.ceph.redhat.com/skanta-2021-01-19_23:44:41-rados:singleton-bluestore-master-distro-basic-bruuni/
3. http://pulpito.ceph.redhat.com/skanta-2021-01-20_06:41:17-rados:singleton-bluestore-master-distro-basic-bruuni/
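
To repeat this verification across every OSD on a host, a sketch (socket paths assume the default /var/run/ceph layout; jq assumed installed):

    # Report slow_used_bytes for each local OSD admin socket;
    # all zeros means no BlueFS data has spilled to the slow device
    for sock in /var/run/ceph/ceph-osd.*.asok; do
        echo -n "$sock: slow_used_bytes="
        ceph daemon "$sock" perf dump | jq '.bluefs.slow_used_bytes'
    done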

Comment 36 errata-xmlrpc 2021-08-30 08:22:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

