Description of problem:

A user is getting multiple "(mds.1): 3 slow requests are blocked" warnings per day. These will not clear until the MDS is manually failed.

Here is the pattern observed from "ceph tell mds.1 dump_blocked_ops" (7.29/1/mds.1.dump_blocked_ops.txt == Month/Day/Occurrence/mds.1.dump_blocked_ops.txt):

7.29/1/mds.1.dump_blocked_ops.txt: "description": "client_request(client.61253774:12217861 unlink #0x100013e7b0d/krb5cc_wss_zswdll1p_823_20230729040702 2023-07-29T08:07:03.058452+0000 caller_uid=842788, caller_gid=667140{})",
7.29/1/mds.1.dump_blocked_ops.txt: "description": "client_request(mds.1:10267 rename #0x100013e7b0d/krb5cc_wss_zswdll1p_823_20230729040702 #0x60e/2000922403e caller_uid=0, caller_gid=0{})",
7.29/1/mds.1.dump_blocked_ops.txt: "description": "client_request(mds.1:10268 rename #0x100013e7b0d/krb5cc_wss_zswdll1p_823_20230729040702 #0x60e/2000922403e caller_uid=0, caller_gid=0{})",

7.29/2/mds.1.dump_blocked_ops.txt: "description": "client_request(client.61253774:12863103 unlink #0x100013e7b0d/krb5cc_wss_zswdll1p_17005_20230729173158 2023-07-29T21:31:59.137464+0000 caller_uid=842788, caller_gid=667140{})",
7.29/2/mds.1.dump_blocked_ops.txt: "description": "client_request(mds.1:48248 rename #0x100013e7b0d/krb5cc_wss_zswdll1p_17005_20230729173158 #0x612/200092a27c9 caller_uid=0, caller_gid=0{})",
7.29/2/mds.1.dump_blocked_ops.txt: "description": "client_request(mds.1:48249 rename #0x100013e7b0d/krb5cc_wss_zswdll1p_17005_20230729173158 #0x612/200092a27c9 caller_uid=0, caller_gid=0{})",

Version-Release number of selected component (if applicable):
RHCS 6.1 (17.2.6-70.el9cp)

How reproducible:
Occurring for the customer 1-3 times per day.

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
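For reference, a minimal sketch of how the descriptions above can be pulled out of the op tracker output (this assumes jq is available on the node, that rank 1 is the affected MDS, and that the output uses the usual op-tracker JSON layout with an "ops" array):

$ # Dump currently blocked ops from the affected MDS rank
$ ceph tell mds.1 dump_blocked_ops > mds.1.dump_blocked_ops.txt
$ # Extract just the request descriptions, matching the pattern shown above
$ jq -r '.ops[].description' mds.1.dump_blocked_ops.txt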
Please specify the severity of this bug. Severity is defined here: https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.
We have already requested from the customer... Can you get us an SOS report off your lead MON node and also upload all the MDS logs from every node hosting an MDS instance? Can you also list the blocked ops and in-flight ops and redirect that output to a file? Attach that file to the case as well.
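A sketch of the commands behind that request (hedged; the exact MDS rank and file names on the customer cluster may differ, rank 1 is shown as an example):

$ # On the lead MON node: generate an SOS report
$ sos report
$ # For each active MDS rank, capture blocked and in-flight ops to files
$ ceph tell mds.1 dump_blocked_ops > mds.1.dump_blocked_ops.txt
$ ceph tell mds.1 dump_ops_in_flight > mds.1.dump_ops_in_flight.txt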
Please let us know if there is anything additional you would like for us to obtain from the customer for this BZ.
So BofA is still experiencing occasional occurrences of slow/blocked ops on clusters that have been upgraded to 6.1z1. In their PVCEPH cluster they had another occurrence @ Thu Aug 17 04:50:13 EDT 2023. They provided the following files, uploaded to SupportShell in case 03578367:

ceph-mds.root.host3.wnboxv.log  <-- mds.1 before fail @ Thu Aug 17 04:50:13 EDT 2023
ceph-mds.root.host7.oqqvka.log  <-- mds.1 after fail @ Thu Aug 17 04:50:13 EDT 2023
mds.1.1692262213.failed.tar.gz  <-- taken before mds.1 fail @ Thu Aug 17 04:50:13 EDT 2023
pvceph.ceph.config.dump.mds.txt

In their PTCEPH cluster they are reporting 4 occurrences since 8/6/2023. Here is a snapshot of those files in SupportShell:

$ yank 03590519
Authenticating the user using the OIDC device authorization grant ...
The SSO authentication is successful
Initializing yank for case 03590519 ...
Retrieving attachments listing for case 03590519 ...

| IDX | PRFX | FILENAME                                    | SIZE (KB) | DATE                 | SOURCE | CACHED |
|-----|------|---------------------------------------------|-----------|----------------------|--------|--------|
| 1   | 0010 | mds.1.1692238512.failed.tar.gz              | 33.62     | 2023-08-17 15:22 UTC | S3     | No     |
| 2   | 0020 | ceph-mds.root.host4.duplag.log-20230817.gz  | 12417.59  | 2023-08-17 15:22 UTC | S3     | No     |
| 3   | 0030 | ceph-mds.root.host0.djvost.log-20230817.gz  | 149948.99 | 2023-08-17 15:22 UTC | S3     | No     |
See KCS article #7031927 (https://access.redhat.com/solutions/7031927).
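As noted in the original description, the blocked requests do not clear until the affected MDS is manually failed. A minimal sketch of that workaround (assuming rank 1 is the rank reporting blocked ops and a standby MDS is available to take over):

$ # Fail the affected rank; a standby MDS will be promoted in its place
$ ceph mds fail 1
$ # Confirm the new active MDS and that the health warning clears
$ ceph fs status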
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2023:7780