Bug 2129414 - [cee/sd][BlueFS][RHCS 5.x] no BlueFS spillover health warning in RHCS 5.x
Summary: [cee/sd][BlueFS][RHCS 5.x] no BlueFS spillover health warning in RHCS 5.x
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 5.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 7.0
Assignee: Adam Kupczyk
QA Contact: Pawan
Docs Contact: Rivka Pollack
URL:
Whiteboard:
Depends On:
Blocks: 2237662 2237880 2237881
 
Reported: 2022-09-23 16:55 UTC by Kritik Sachdeva
Modified: 2024-04-12 04:25 UTC (History)
CC List: 23 users

Fixed In Version: ceph-18.2.0-1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 2237880 2237881 (view as bug list)
Environment:
Last Closed: 2023-12-13 15:19:29 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github ceph ceph pull 50930 0 None Merged reef: os/bluestore: fix spillover alert 2023-07-07 18:15:18 UTC
Red Hat Issue Tracker RHCEPH-5338 0 None None None 2022-09-23 17:11:40 UTC
Red Hat Knowledge Base (Solution) 6977626 0 None None None 2022-09-27 08:57:09 UTC
Red Hat Product Errata RHBA-2023:7780 0 None None None 2023-12-13 15:19:44 UTC

Description Kritik Sachdeva 2022-09-23 16:55:27 UTC
Description of problem:
In RHCS 5.x, no BlueFS spillover health warning is generated when RocksDB starts consuming space on the (slower) block device.

Version-Release number of selected component (if applicable): RHCS 5.0z4 and RHCS 5.2

How reproducible: Always

Steps to Reproduce:
1. Deploy a fresh RHCS 5 cluster, or upgrade a cluster from RHCS 4 to RHCS 5, with a small block DB size (e.g., 10 MiB or 30 MiB)
   - For example:
~~~
service_type: osd
service_id: osd_nodeXY_paths
service_name: osd.osd_nodeXY_paths
placement:
  hosts:
  - nodeX
  - nodeY
spec:
  block_db_size: 10485760   <----
  data_devices:
    paths:
    - /dev/sdb
    - /dev/sdc
  db_devices:
    paths:
    - /dev/sdd
  filter_logic: AND
  objectstore: bluestore
~~~
2. Add some data into the cluster using RBD 
3. Collect the output of the below command and look for the "slow_used_bytes" parameter.
~~~
$ ceph daemon osd.<id> perf dump bluefs
~~~
   - If using non-colocated OSDs, also verify using the command below and look for the "SLOW" column
~~~
$ ceph daemon osd.<id> bluefs stats    
~~~

*NOTE*: non-colocated = OSDs with the DB and data on separate devices
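To illustrate the check in step 3, the perf dump output can be inspected with a small script. The JSON below is a hypothetical, truncated sample of `ceph daemon osd.<id> perf dump bluefs` output (a real dump contains many more counters); a non-zero `slow_used_bytes` means RocksDB data has spilled over to the slow device:
~~~
# Hypothetical, truncated sample of `ceph daemon osd.<id> perf dump bluefs`
# output; the counter values here are illustrative only.
sample='{"bluefs":{"db_total_bytes":10485760,"db_used_bytes":9437184,"slow_used_bytes":4194304}}'

# Extract slow_used_bytes; any value greater than 0 indicates spillover.
echo "$sample" | python3 -c 'import json,sys; print(json.load(sys.stdin)["bluefs"]["slow_used_bytes"])'
~~~
On a live cluster the same extraction can be run by piping the real `perf dump bluefs` output through the `python3` one-liner instead of `$sample`.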

Actual results: No BlueFS spillover health warning is raised.

Expected results: The BlueFS spillover health warning should be raised.
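For reference, the health check that should fire is `BLUEFS_SPILLOVER`. The sample below is a hypothetical illustration of what `ceph health detail` could report once the warning works (the exact wording and values vary by release; only the health check code is taken as given):
~~~
# Hypothetical `ceph health detail` output with the warning present; the
# message text and sizes are illustrative, BLUEFS_SPILLOVER is the real code.
sample="HEALTH_WARN 1 OSD(s) experiencing BlueFS spillover
[WRN] BLUEFS_SPILLOVER: 1 OSD(s) experiencing BlueFS spillover
     osd.3 spilled over 4 MiB metadata from 'db' device (9 MiB used of 10 MiB) to slow device"

# On a live cluster: ceph health detail | grep -A2 BLUEFS_SPILLOVER
echo "$sample" | grep -c "BLUEFS_SPILLOVER"
~~~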


Additional info:

Reproduced the same scenario on RHCS 4.2z4, where the BlueFS spillover health warning was generated as expected.

Comment 36 errata-xmlrpc 2023-12-13 15:19:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:7780

Comment 37 Red Hat Bugzilla 2024-04-12 04:25:11 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.

