Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 2271715

Summary: RGW. Ops Perf Counter TimeStamp bucket filtering
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Ali Maredia <amaredia>
Component: RGW
Assignee: Ali Maredia <amaredia>
Status: CLOSED ERRATA
QA Contact: Chaithra <ckulal>
Severity: high
Docs Contact: Disha Walvekar <dwalveka>
Priority: unspecified
Version: 7.0
CC: amaredia, ceph-eng-bugs, cephqe-warriors, dparkes, dwalveka, mbenjamin, mkasturi, rpollack, tchandra, tserlin, vereddy
Target Milestone: ---   
Target Release: 7.0z2   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: ceph-18.2.0-162.el9cp
Doc Type: Enhancement
Doc Text:
Feature: Labeled RGW op counters can now be removed from the output of the `ceph counter dump` command after a preset period of inactivity.
Reason: This allows deleted buckets to disappear from Grafana panels soon after deletion, rather than only when the counters for the bucket are evicted from the perf counters cache.
Result: A new config variable, `rgw_op_counters_dump_expiration`, controls the number of seconds for which a labeled perf counter is emitted by the `ceph counter dump` command. If a bucket- or user-labeled counter has not been updated within `rgw_op_counters_dump_expiration` seconds, it will no longer appear in the JSON output of `ceph counter dump`. To turn this filtering off, set `rgw_op_counters_dump_expiration` to 0. Finally, the value of `rgw_op_counters_dump_expiration` should not be changed at runtime.
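A minimal sketch of how the new variable might be applied, assuming the standard `ceph config set` syntax; the `client.rgw` target and the 3600-second value are illustrative, not taken from this bug:

```shell
# Emit labeled op counters only if they were updated within the last hour
# (the client.rgw target and the value 3600 are illustrative assumptions)
ceph config set client.rgw rgw_op_counters_dump_expiration 3600

# Disable expiration-based filtering entirely
ceph config set client.rgw rgw_op_counters_dump_expiration 0

# Inspect the labeled counters currently emitted
ceph counter dump
```

Per the Doc Text above, this value should be set once and not changed at runtime.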
Story Points: ---
Clone Of: 2265558
Environment:
Last Closed: 2024-05-07 12:09:45 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 2265558    
Bug Blocks: 2270485    

Description Ali Maredia 2024-03-27 02:41:56 UTC
+++ This bug was initially created as a clone of Bug #2265558 +++

Description of problem:

There is a requirement to send only live data to Prometheus, so that the Prometheus database holds stats that accurately represent the data available. For this, a timestamp in the cached data will be used: after a defined period of time, if the timestamp has not been updated, the exporter will stop sending that data to Prometheus.
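The timestamp-based filtering described above can be sketched as follows. This is a minimal illustration of the idea, not the actual RGW or exporter code; the function and data-structure names are hypothetical:

```python
import time

def filter_expired(counters, expiration_secs, now=None):
    """Return only counters updated within the last `expiration_secs` seconds.

    `counters` is a hypothetical cache mapping a label to a
    (value, last_update_timestamp) pair. An expiration of 0 disables
    filtering, mirroring the behavior of rgw_op_counters_dump_expiration.
    """
    if expiration_secs == 0:
        return dict(counters)
    if now is None:
        now = time.time()
    return {
        label: (value, ts)
        for label, (value, ts) in counters.items()
        if now - ts <= expiration_secs
    }
```

For example, with a 60-second expiration, a bucket counter last updated 70 seconds ago would be dropped from the dump output, while one updated 20 seconds ago would still be emitted.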


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

--- Additional comment from Ali Maredia on 2024-02-22 19:35:53 UTC ---

This issue is being worked on upstream at: https://github.com/ceph/ceph/pull/55268

- Ali

--- Additional comment from Matt Benjamin (redhat) on 2024-03-18 15:46:49 UTC ---

Hi Ali,

What's going on with this bz?

1. this was prioritized coming out of 7.0
2. the upstream PR https://github.com/ceph/ceph/pull/55268 appears to be stale?
3. we need these fixes downstream for 7.1


Matt

--- Additional comment from Ali Maredia on 2024-03-26 08:41:20 UTC ---

The following PR has been updated (https://github.com/ceph/ceph/pull/55268/files) and the commits have been cherry-picked downstream.

- Ali

--- Additional comment from  on 2024-03-26 21:45:47 UTC ---

Builds are ready for testing. We need a qa_ack+ in order to attach this BZ to the errata advisory and move to ON_QA.

Comment 12 errata-xmlrpc 2024-05-07 12:09:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:2743