.RocksDB compaction no longer exhausts free space of BlueFS
Previously, the balancing of free space between the main storage and the storage for RocksDB, managed by BlueFS, happened only while write operations were underway. As a result, BlueFS could return an `ENOSPC` error when RocksDB compaction was triggered right before a long interval without write operations. With this update, the code periodically checks the free space balance even when no write operations are ongoing, so that compaction no longer exhausts the free space of BlueFS.
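The following is a small standalone sketch, not the actual BlueStore code, of the idea behind the fix: a background task keeps checking whether BlueFS has dropped below a minimum free-space floor and gifts it more space from the main device, independently of the write path. The thresholds and helper names (MIN_BLUEFS_FREE, bluefs_free_bytes, gift_to_bluefs, compaction_consumes) are hypothetical placeholders.

// Illustrative sketch only; real logic lives inside BlueStore.
#include <atomic>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <thread>

static constexpr uint64_t MIN_BLUEFS_FREE = 1ull << 30;  // assumed 1 GiB floor
static constexpr uint64_t GIFT_CHUNK      = 1ull << 30;  // assumed gift size

// Stand-ins for querying BlueFS free space and handing it more space.
std::atomic<uint64_t> bluefs_free{2ull << 30};

uint64_t bluefs_free_bytes() { return bluefs_free.load(); }
void gift_to_bluefs(uint64_t bytes) { bluefs_free += bytes; }

// Simulated space consumption by RocksDB compaction while the OSD is idle.
void compaction_consumes(uint64_t bytes) {
  // crude non-atomic decrement; good enough for a demonstration
  uint64_t cur = bluefs_free.load();
  bluefs_free.store(cur > bytes ? cur - bytes : 0);
}

int main() {
  std::atomic<bool> stop{false};

  // Periodic balancer: runs regardless of whether client writes are in
  // flight, so compaction during an idle interval can no longer drain
  // BlueFS below the minimum and hit ENOSPC.
  std::thread balancer([&] {
    while (!stop.load()) {
      if (bluefs_free_bytes() < MIN_BLUEFS_FREE)
        gift_to_bluefs(GIFT_CHUNK);
      std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
  });

  // Pretend compaction eats space while no writes are happening.
  for (int i = 0; i < 20; ++i) {
    compaction_consumes(256ull << 20);  // 256 MiB per step
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    std::cout << "bluefs free: " << (bluefs_free_bytes() >> 20) << " MiB\n";
  }

  stop = true;
  balancer.join();
  return 0;
}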
How reproducible is this? I'm looking at the luminous code and I think the only way this would happen is if the device is fast and/or the OSD is idle, but RocksDB is doing compaction. We have a configurable that gives BlueFS a minimum of 1 GB of free space. I think the way to address this is to bump that to, say, 5 GB or 10 GB. It's hard to tell if that will do the trick, though, without being able to reproduce it...
Can you verify that it reproduces, and then try it again with the following setting?
bluestore_bluefs_min_free = 10737418240  # 10 GB (the current default is 1 GB)
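For anyone trying that experiment, the setting would normally go into ceph.conf on the OSD nodes; placing it under the [osd] section and restarting the OSDs afterwards are my assumptions here, not something stated in this report:

[osd]
# raise the BlueFS free-space floor from the default 1 GB to 10 GB
bluestore_bluefs_min_free = 10737418240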
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:0911