Description of problem:
[BlueStore] RocksDB spillover to the slow (HDD) device on compaction.

From one of the OSDs:

"bluefs": {
    "gift_bytes": 0,
    "reclaim_bytes": 0,
    "db_total_bytes": 32212246528,
    "db_used_bytes": 15892217856,
    "wal_total_bytes": 0,
    "wal_used_bytes": 0,
    "slow_total_bytes": 160009551872,
    "slow_used_bytes": 137363456,    <===============
    "num_files": 261,
    "log_bytes": 8749056,
    "log_compactions": 1,
    "logged_bytes": 369836032,
    "files_written_wal": 2,
    "files_written_sst": 607,
    "bytes_written_wal": 12874543619,
    "bytes_written_sst": 36489876338
},

Version-Release number of selected component (if applicable):
Red Hat Ceph Storage 3.2.z2
ceph-osd-12.2.8-128.el7cp.x86_64
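A quick way to spot this condition across OSDs (a minimal sketch, not part of the original report; it assumes admin-socket access on the OSD host, and the OSD id list is hypothetical) is to read the bluefs counters from "ceph daemon osd.<id> perf dump" and flag any non-zero slow_used_bytes:

import json
import subprocess

def bluefs_counters(osd_id):
    # Read the "bluefs" section of the OSD's perf dump via the admin socket.
    out = subprocess.check_output(
        ["ceph", "daemon", "osd.%d" % osd_id, "perf", "dump"])
    return json.loads(out)["bluefs"]

for osd_id in (0, 1, 2):  # hypothetical OSD ids local to this host
    c = bluefs_counters(osd_id)
    if c["slow_used_bytes"] > 0:
        print("osd.%d: %d bytes spilled to the slow device (of %d)"
              % (osd_id, c["slow_used_bytes"], c["slow_total_bytes"]))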
These might be related:
http://tracker.ceph.com/issues/38745 (particularly the note about needing twice the space during the first compaction)
And also: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-February/033286.html
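Plugging the counters from the description into that "twice the space during the first compaction" observation suggests it fits. A rough headroom check (a sketch only; the 2x factor comes from the tracker discussion, not from any official sizing rule):

# Counters from the perf dump in the description.
db_total_bytes = 32212246528   # ~30 GiB DB partition
db_used_bytes = 15892217856    # ~14.8 GiB of live DB data

# If the first compaction can transiently need ~2x the live DB size,
# this partition is left with very little slack:
headroom = db_total_bytes - 2 * db_used_bytes
print("worst-case compaction headroom: %.2f GiB" % (headroom / 2.0 ** 30))
# -> ~0.40 GiB, so the ~131 MiB seen in slow_used_bytes is plausible.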
Updating the QA Contact to Hemant. Hemant will reroute this to the appropriate QE associate. Regards, Giri
Did not find any issues:

"bluefs": {
    "db_total_bytes": 7681498677248,
    "db_used_bytes": 18612224,
    "wal_total_bytes": 0,
    "wal_used_bytes": 0,
    "slow_total_bytes": 0,
    "slow_used_bytes": 0,
    "num_files": 12,
    "log_bytes": 3014656,
    "log_compactions": 1,
    "logged_bytes": 19599360,
    "files_written_wal": 1,
    "files_written_sst": 2,
    "bytes_written_wal": 33300480,
    "bytes_written_sst": 8192,
    "bytes_written_slow": 0,
    "max_bytes_wal": 0,
    "max_bytes_db": 36700160,
    "max_bytes_slow": 0,
    "read_random_count": 18,
    "read_random_bytes": 3522,
    "read_random_disk_count": 0,
    "read_random_disk_bytes": 0,
    "read_random_buffer_count": 18,
    "read_random_buffer_bytes": 3522,
    "read_count": 48,
    "read_bytes": 165259,
    "read_prefetch_count": 3,
    "read_prefetch_bytes": 3203
},
"bluestore": {
    "kv_flush_lat": {
        "avgcount": 4777,
        "sum": 0.124835532,
        "avgtime": 0.000026132
    },
    "kv_commit_lat": {
        "avgcount": 4777,
        "sum": 0.903064971,
        "avgtime": 0.000189044
    },
    "kv_sync_lat": {
        "avgcount": 4777,
        "sum": 1.027900503,
        "avgtime": 0.000215176
    },

Performed the Teuthology bluestore regression tests:
1. http://pulpito.ceph.redhat.com/skanta-2021-01-19_22:36:10-rados:singleton-bluestore-master-distro-basic-bruuni/
2. http://pulpito.ceph.redhat.com/skanta-2021-01-19_23:44:41-rados:singleton-bluestore-master-distro-basic-bruuni/
3. http://pulpito.ceph.redhat.com/skanta-2021-01-20_06:41:17-rados:singleton-bluestore-master-distro-basic-bruuni/
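On the builds used for this verification, spillover would also be surfaced as a BLUEFS_SPILLOVER health warning (the alert exists from Nautilus onward), so a cluster-wide re-check after the regression runs can be scripted as below (a sketch; assumes a working ceph CLI and admin keyring):

import subprocess

# Scan cluster health for the BlueFS spillover alert.
health = subprocess.check_output(["ceph", "health", "detail"]).decode()
if "BLUEFS_SPILLOVER" in health:
    print("spillover still present:\n" + health)
else:
    print("no BlueFS spillover reported")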
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:3294