Bug 2172791

Summary: mds: make num_fwd and num_retry to __u32
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Xiubo Li <xiubli>
Component: CephFS
Assignee: Xiubo Li <xiubli>
Status: CLOSED ERRATA
QA Contact: Hemanth Kumar <hyelloji>
Severity: medium
Docs Contact: Akash Raj <akraj>
Priority: unspecified
Version: 6.0
CC: akraj, ceph-eng-bugs, cephqe-warriors, gfarnum, tserlin, vdas, vshankar
Target Milestone: ---
Target Release: 6.1
Hardware: Unspecified
OS: Unspecified
Fixed In Version: ceph-17.2.6-11.el9cp
Doc Type: Bug Fix
Doc Text:
.Client requests no longer bounce indefinitely between MDS and clients
Previously, there was a mismatch in the Ceph client request protocol between CephFS clients and the MDS. Due to this, the corresponding information would be truncated or lost in communication between CephFS clients and the MDS, and client requests would bounce between the MDS and clients indefinitely. With this fix, the corresponding members of the client request protocol are given the same type, and the new code remains compatible with older Ceph versions. Client requests no longer bounce between the MDS and clients indefinitely, and stop after the expected number of retries.
Story Points: ---
Clone Of: ---
Clones: 2172808 (view as bug list)
Last Closed: 2023-06-15 09:16:51 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Bug Blocks: 2172808, 2192813

Description Xiubo Li 2023-02-23 06:07:02 UTC
Description of problem:

The num_fwd field in MClientRequestForward is an int32_t, while the num_fwd
in ceph_mds_request_head is a __u8. This is buggy: once num_fwd grows past
255, the __u8 value wraps around (256 truncates back to 0), but the client
cannot recognize that this has happened.
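
A minimal sketch of the narrowing, assuming simplified stand-in structs (the
real definitions live in src/include/ceph_fs.h and
src/messages/MClientRequestForward.h and carry many more members):

#include <cstdint>
#include <iostream>

// Simplified stand-ins for the real Ceph structures; the field names
// follow the upstream ones, but the layout here is illustrative only.
struct ceph_mds_request_head_legacy {
  uint8_t num_fwd;   // __u8 on the wire: wraps modulo 256
  uint8_t num_retry;
};

struct MClientRequestForward {
  int32_t num_fwd;   // 32-bit forward counter kept by the MDS
};

int main() {
  MClientRequestForward fwd;
  fwd.num_fwd = 256;  // the request has already been forwarded 256 times

  // Narrowing assignment when encoding into the legacy header:
  // 256 % 256 == 0, so the counter silently restarts.
  ceph_mds_request_head_legacy head;
  head.num_fwd = static_cast<uint8_t>(fwd.num_fwd);
  head.num_retry = 0;

  std::cout << "MDS-side num_fwd:    " << fwd.num_fwd << "\n"
            << "client-side num_fwd: " << +head.num_fwd << "\n";
  // Prints 256 vs 0: the client cannot tell the request was ever
  // forwarded, so it keeps resending and the request bounces between
  // the MDS and the client indefinitely.
  return 0;
}

Per the summary and doc text, the fix widens num_fwd and num_retry to __u32
on both sides of the protocol so the counters no longer wrap, while keeping
the encoding compatible with older Ceph versions.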

Comment 1 RHEL Program Management 2023-02-23 06:07:12 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 13 errata-xmlrpc 2023-06-15 09:16:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 6.1 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:3623