Bug 2228635 - (mds.1): 3 slow requests are blocked
Summary: (mds.1): 3 slow requests are blocked
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 6.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 7.0
Assignee: Xiubo Li
QA Contact: Hemanth Kumar
Docs Contact: Rivka Pollack
URL:
Whiteboard:
Depends On:
Blocks: 2233131 2237662
 
Reported: 2023-08-02 22:37 UTC by Scott Nipp
Modified: 2024-03-13 12:42 UTC
11 users

Fixed In Version: ceph-18.2.0-46.el9cp
Doc Type: Bug Fix
Doc Text:
.Deadlocks no longer occur between the unlink and reintegration requests
Previously, when fixing an async dirop bug, earlier commits introduced a regression that caused deadlocks between unlink and reintegration requests. With this fix, those commits are reverted and the deadlock between unlink and reintegration requests no longer occurs.
Clone Of:
: 2233131 (view as bug list)
Environment:
Last Closed: 2023-12-13 15:21:28 UTC
Embargoed:




Links
| System | ID | Private | Priority | Status | Summary | Last Updated |
|--------|----|---------|----------|--------|---------|--------------|
| Ceph Project Bug Tracker | 61818 | 0 | None | None | None | 2023-08-21 05:00:16 UTC |
| Ceph Project Bug Tracker | 62096 | 0 | None | None | None | 2023-08-03 01:07:09 UTC |
| Red Hat Bugzilla | 2188460 | 0 | unspecified | CLOSED | MDS Behind on trimming (145961/128) max_segments: 128, num_segments: 145961 | 2024-03-13 12:49:02 UTC |
| Red Hat Bugzilla | 2203258 | 0 | unspecified | CLOSED | MDS Behind on trimming (145961/128) max_segments: 128, num_segments: 145961 | 2023-08-02 22:41:53 UTC |
| Red Hat Issue Tracker | RHCEPH-7150 | 0 | None | None | None | 2023-08-02 22:38:28 UTC |
| Red Hat Knowledge Base (Solution) | 7031927 | 0 | None | None | None | 2023-09-07 10:30:16 UTC |
| Red Hat Product Errata | RHBA-2023:7780 | 0 | None | None | None | 2023-12-13 15:21:40 UTC |

Description Scott Nipp 2023-08-02 22:37:34 UTC
Description of problem:
The user is seeing multiple "(mds.1): 3 slow requests are blocked" warnings per day. These do not clear until the MDS is manually failed.

Here is the pattern observed from ceph tell mds.1 dump_blocked_ops
(7.29/1/mds.1.dump_blocked_ops.txt == Month/Day/Occurrence/mds.1.dump_blocked_ops.txt):

7.29/1/mds.1.dump_blocked_ops.txt:            "description": "client_request(client.61253774:12217861 unlink #0x100013e7b0d/krb5cc_wss_zswdll1p_823_20230729040702 2023-07-29T08:07:03.058452+0000 caller_uid=842788, caller_gid=667140{})",
7.29/1/mds.1.dump_blocked_ops.txt:            "description": "client_request(mds.1:10267 rename #0x100013e7b0d/krb5cc_wss_zswdll1p_823_20230729040702 #0x60e/2000922403e caller_uid=0, caller_gid=0{})",
7.29/1/mds.1.dump_blocked_ops.txt:            "description": "client_request(mds.1:10268 rename #0x100013e7b0d/krb5cc_wss_zswdll1p_823_20230729040702 #0x60e/2000922403e caller_uid=0, caller_gid=0{})",

7.29/2/mds.1.dump_blocked_ops.txt:            "description": "client_request(client.61253774:12863103 unlink #0x100013e7b0d/krb5cc_wss_zswdll1p_17005_20230729173158 2023-07-29T21:31:59.137464+0000 caller_uid=842788, caller_gid=667140{})",
7.29/2/mds.1.dump_blocked_ops.txt:            "description": "client_request(mds.1:48248 rename #0x100013e7b0d/krb5cc_wss_zswdll1p_17005_20230729173158 #0x612/200092a27c9 caller_uid=0, caller_gid=0{})",
7.29/2/mds.1.dump_blocked_ops.txt:            "description": "client_request(mds.1:48249 rename #0x100013e7b0d/krb5cc_wss_zswdll1p_17005_20230729173158 #0x612/200092a27c9 caller_uid=0, caller_gid=0{})",
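
For context, a minimal sketch of how this pattern is typically observed and cleared; the grep filter and the health/fail commands below are illustrative additions, not from the case notes (rank 1 matches the affected rank here, but the fail target may need the daemon name or an fs-qualified rank on other clusters):

$ ceph tell mds.1 dump_blocked_ops | grep '"description"'   <-- yields the unlink/rename pairs shown above
$ ceph health detail                                        <-- shows the matching MDS_SLOW_REQUEST warning
$ ceph mds fail 1                                           <-- manual workaround: fail the MDS so a standby takes over the rank and the requests clear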

Version-Release number of selected component (if applicable):
RHCS 6.1 (17.2.6-70.el9cp)

How reproducible:
Occurring for customer 1-3 times per day.

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 RHEL Program Management 2023-08-02 22:37:43 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 2 Scott Nipp 2023-08-02 22:44:27 UTC
We have already requested the following from the customer:

Can you get us an SOS report off your lead MON node and also upload all the MDS logs from every node hosting an MDS instance?

Can you also list blocked ops and in-flight ops and redirect that output to a file? Attach that file to the case as well.
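
For reference, a hedged sketch of commands that would produce the requested data; the rank and log path are placeholders and assume file-based logging on a cephadm deployment:

$ sos report                                                    <-- run on the lead MON node (older releases ship this as sosreport)
$ ls /var/log/ceph/*/ceph-mds.*.log*                            <-- MDS logs to gather from every host running an MDS daemon
$ ceph tell mds.1 dump_blocked_ops > mds.1.blocked_ops.txt      <-- blocked ops redirected to a file for the case
$ ceph tell mds.1 dump_ops_in_flight > mds.1.ops_in_flight.txt  <-- in-flight ops, likewise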

Comment 3 Scott Nipp 2023-08-02 22:45:26 UTC
Please let us know if there is anything additional you would like for us to obtain from the customer for this BZ.

Comment 33 Scott Nipp 2023-08-18 14:57:08 UTC
BofA is still experiencing occasional occurrences of slow/blocked ops on clusters that have been upgraded to 6.1z1.

In their PVCEPH cluster they had another occurrence at Thu Aug 17 04:50:13 EDT 2023. They provided the following files, uploaded to SupportShell in case 03578367:
ceph-mds.root.host3.wnboxv.log <-- mds.1 before fail @ Thu Aug 17 04:50:13 EDT 2023
ceph-mds.root.host7.oqqvka.log <-- mds.1 after fail @ Thu Aug 17 04:50:13 EDT 2023
mds.1.1692262213.failed.tar.gz <-- taken before mds.1 fail @ Thu Aug 17 04:50:13 EDT 2023
pvceph.ceph.config.dump.mds.txt

In their PTCEPH cluster they are reporting 4 occurrences since 8/6/2023.  Here is a snapshot of those files in SupportShell:
$ yank 03590519
Authenticating the user using the OIDC device authorization grant ...

The SSO authentication is successful

Initializing yank for case 03590519 ...
Retrieving attachments listing for case 03590519 ...

|   IDX |  PRFX  | FILENAME                                   |   SIZE (KB) | DATE                 | SOURCE   |   CACHED |
|-------|--------|--------------------------------------------|-------------|----------------------|----------|----------|
|     1 |  0010  | mds.1.1692238512.failed.tar.gz             |       33.62 | 2023-08-17 15:22 UTC | S3       |      No  |
|     2 |  0020  | ceph-mds.root.host4.duplag.log-20230817.gz |    12417.59 | 2023-08-17 15:22 UTC | S3       |      No  |
|     3 |  0030  | ceph-mds.root.host0.djvost.log-20230817.gz |   149948.99 | 2023-08-17 15:22 UTC | S3       |      No  |

Comment 49 Manny 2023-09-07 10:30:17 UTC
See KCS article #7031927 (https://access.redhat.com/solutions/7031927).

Comment 68 errata-xmlrpc 2023-12-13 15:21:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:7780

