Bug 2055595 - image clone purges repeatedly get stuck until ceph-mgr is manually restarted
Summary: image clone purges repeatedly get stuck until ceph-mgr is manually restarted
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RBD
Version: 5.0
Hardware: All
OS: All
Priority: high
Severity: high
Target Milestone: ---
Target Release: 6.1
Assignee: Ilya Dryomov
QA Contact: Preethi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-02-17 10:24 UTC by Boaz
Modified: 2023-09-18 04:32 UTC
CC: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-03-16 16:01:27 UTC
Embargoed:




Links:
Red Hat Issue Tracker: RHCEPH-3187 (Last Updated: 2022-02-17 10:28:21 UTC)

Internal Links: 2060371

Comment 6 Peter Lauterbach 2022-03-15 14:01:50 UTC
@idryomov please advise when this bz will be triaged and targeted to a release. It affects our scalability testing and will cause issues for customers running at larger scale.
This performance and scale testing is being done in advance of those very large deployments.
cc: @vkolli

Comment 8 Boaz 2022-04-19 15:55:20 UTC
Just FYI: in order to lessen the load on the manager, I applied the following settings after deployment:

ceph config set mgr mgr/prometheus/scrape_interval 60
ceph config set mgr mgr/prometheus/rbd_stats_pools_refresh_interval 600

but it still happened. I'm going to try to recreate this issue on the setup, but I'm afraid I only have 1 day left on the allocation, so it might get postponed by 3 months.
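
For anyone retrying this, a minimal sketch of how to confirm the overrides above actually took effect, and how to fail the active mgr over to a standby (the manual restart the summary refers to), assuming a standby mgr daemon exists:

# Check the values the mgr is currently configured with
ceph config get mgr mgr/prometheus/scrape_interval
ceph config get mgr mgr/prometheus/rbd_stats_pools_refresh_interval

# Fail the active mgr over to a standby instead of restarting the daemon;
# substitute the active mgr's name as reported by 'ceph mgr stat'
ceph mgr fail <active-mgr-name>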

Comment 19 Red Hat Bugzilla 2023-09-18 04:32:18 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

