Bug 2129414

Summary: [cee/sd][BlueFS][RHCS 5.x] no BlueFS spillover health warning in RHCS 5.x
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: RADOS
Version: 5.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Keywords: Regression
Target Release: 7.0
Status: CLOSED ERRATA
Reporter: Kritik Sachdeva <ksachdev>
Assignee: Adam Kupczyk <akupczyk>
QA Contact: Pawan <pdhiran>
Docs Contact: Rivka Pollack <rpollack>
CC: akraj, akupczyk, amathuri, bhubbard, ceph-eng-bugs, cephqe-warriors, choffman, gjose, hakumar, hklein, kdreyer, ksirivad, lflores, lithomas, nojha, pdhange, pdhiran, rfriedma, roemerso, rzarzyns, skanta, sseshasa, vumrao
Fixed In Version: ceph-18.2.0-1
Clones: 2237880, 2237881
Bug Blocks: 2237662, 2237880, 2237881
Type: Bug
Last Closed: 2023-12-13 15:19:29 UTC

Description Kritik Sachdeva 2022-09-23 16:55:27 UTC
Description of problem:
In RHCS 5.x, no BlueFS spillover health warning is generated when RocksDB starts consuming space on the (slower) block device.

Version-Release number of selected component (if applicable): RHCS 5.0z4 and RHCS 5.2

How reproducible: Always

Steps to Reproduce:
1. Deploy a fresh RHCS 5 cluster, or upgrade a cluster from RHCS 4 to RHCS 5, with a small block DB size (e.g., 10 MiB or 30 MiB)
   - For example:
~~~
service_type: osd
service_id: osd_nodeXY_paths
service_name: osd.osd_nodeXY_paths
placement:
  hosts:
  - nodeX
  - nodeY
spec:
  block_db_size: 10485760   <----
  data_devices:
    paths:
    - /dev/sdb
    - /dev/sdc
  db_devices:
    paths:
    - /dev/sdd
  filter_logic: AND
  objectstore: bluestore
~~~
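   - Note that block_db_size is in bytes (10485760 = 10 MiB). Assuming the spec is saved to a file (file name is illustrative), it can be applied with the orchestrator:
~~~
$ ceph orch apply -i osd_spec.yaml
~~~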
2. Add some data to the cluster using RBD
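   - For example (pool and image names, sizes, and the amount of I/O are illustrative):
~~~
$ ceph osd pool create rbdpool
$ rbd pool init rbdpool
$ rbd create rbdpool/testimg --size 10G
$ rbd bench --io-type write --io-size 4M --io-total 2G rbdpool/testimg
~~~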
3. Collect the output of the command below and look for the "slow_used_bytes" counter.
~~~
$ ceph daemon osd.<id> perf dump bluefs
~~~
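   - Trimmed sample of the relevant counters (values are illustrative); a non-zero "slow_used_bytes" means RocksDB data has spilled over onto the slow device:
~~~
"bluefs": {
    "db_total_bytes": 10485760,
    "db_used_bytes": 9437184,
    "slow_total_bytes": 107374182400,
    "slow_used_bytes": 655360,
    ...
}
~~~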
   - If using non-colocated OSDs, also verify with the command below and look for the "SLOW" column
~~~
$ ceph daemon osd.<id> bluefs stats    
~~~
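   - Trimmed, illustrative sketch of the usage matrix this prints; data charged to the SLOW column indicates spillover:
~~~
DEV/LEV     WAL         DB          SLOW        ...
DB          0 B         9.0 MiB     640 KiB     ...
TOTALS      0 B         9.0 MiB     640 KiB     ...
~~~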

*NOTE*: non-colocated = OSDs having their DB and data on separate devices

Actual results: No BlueFS spillover health warning is raised.

Expected results: The BLUEFS_SPILLOVER health warning should be raised.
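
For reference, on RHCS 4.x the warning looks roughly like this (OSD id and sizes are illustrative):
~~~
$ ceph health detail
HEALTH_WARN 1 OSD(s) experiencing BlueFS spillover
BLUEFS_SPILLOVER 1 OSD(s) experiencing BlueFS spillover
    osd.2 spilled over 640 KiB metadata from 'db' device (9.0 MiB used of 10 MiB) to slow device
~~~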


Additional info:

The same steps were tried on RHCS 4.2z4, where the BlueFS spillover health warning was raised as expected.
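
On RHCS 4.x the warning is additionally gated by the "bluestore_warn_on_bluefs_spillover" option (assumed here to default to true); its value can be checked with:
~~~
$ ceph config get osd bluestore_warn_on_bluefs_spillover
~~~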

Comment 36 errata-xmlrpc 2023-12-13 15:19:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:7780

Comment 37 Red Hat Bugzilla 2024-04-12 04:25:11 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.