Bug 2207491 - mds: do not take the ino which has been used [NEEDINFO]
Summary: mds: do not take the ino which has been used
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 5.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.3z4
Assignee: Xiubo Li
QA Contact: Amarnath
Docs Contact: Akash Raj
URL:
Whiteboard:
Duplicates: 2189134
Depends On:
Blocks: 2210690
 
Reported: 2023-05-16 06:21 UTC by Xiubo Li
Modified: 2023-07-19 16:20 UTC (History)
CC List: 10 users

Fixed In Version: ceph-16.2.10-183.el8cp
Doc Type: Bug Fix
Doc Text:
.Allocating inodes no longer fails
Previously, when replaying the journals, if the `inodetable` or the `sessionmap` versions did not match the ones in the MDS caches, the corresponding `CInodes` would be added to the _inode_map_ without removing the `ino#` from the `inodetable` or from the sessions' preallocated inos list. This caused allocation of a `CInode` to fail, because its `ino#` was already present in the inode map. With this fix, when allocating a new `CInode`, any `ino#` that is already present in the inode map is skipped (an illustrative sketch follows the description below).
Clone Of:
Environment:
Last Closed: 2023-07-19 16:19:11 UTC
Embargoed:
amk: needinfo?


Links
System                      ID              Last Updated
Ceph Project Bug Tracker    52280           2023-05-16 06:24:22 UTC
Red Hat Issue Tracker       RHCEPH-6668     2023-05-16 06:22:29 UTC
Red Hat Product Errata      RHBA-2023:4213  2023-07-19 16:20:02 UTC

Description Xiubo Li 2023-05-16 06:21:56 UTC
ceph upstream tracker: https://tracker.ceph.com/issues/52280
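
To make the mechanism in the Doc Text above concrete, here is a minimal, self-contained C++ sketch. It is not the actual Ceph MDS code; the names (FakeMDCache, FakeInode, alloc_ino, free_inos, inode_map) are hypothetical stand-ins. It only models the idea of the fix: when handing out a new ino#, skip any candidate that already has an inode in the in-memory inode map.

// Simplified sketch, not Ceph code: a newly allocated ino# is skipped if an
// inode with that number already exists in the in-memory inode map, which is
// the situation an out-of-sync inotable/sessionmap replay could create.
#include <cstdint>
#include <iostream>
#include <optional>
#include <set>
#include <unordered_map>

using inodeno_t = uint64_t;

struct FakeInode {
    inodeno_t ino;
};

class FakeMDCache {
public:
    // Inos considered free after journal replay; they may overlap with
    // inode_map when the inotable/sessionmap versions were out of sync.
    std::set<inodeno_t> free_inos{100, 101, 102, 103};

    // In-memory inode map keyed by ino#; 101 is already in use here.
    std::unordered_map<inodeno_t, FakeInode> inode_map{{101, {101}}};

    // Allocate the lowest free ino# that is not already present in inode_map.
    std::optional<inodeno_t> alloc_ino() {
        for (auto it = free_inos.begin(); it != free_inos.end(); ) {
            inodeno_t ino = *it;
            it = free_inos.erase(it);   // consume the candidate either way
            if (inode_map.count(ino)) {
                // The ino# is already used by an existing inode: skip it
                // instead of failing the allocation.
                continue;
            }
            inode_map.emplace(ino, FakeInode{ino});
            return ino;
        }
        return std::nullopt;            // no free ino# left
    }
};

int main() {
    FakeMDCache cache;
    while (auto ino = cache.alloc_ino())
        std::cout << "allocated ino " << *ino << "\n";
}

Running the sketch prints inos 100, 102, and 103; 101 is skipped because an inode with that number already exists in the map, which mirrors the behaviour the fix guarantees instead of the previous allocation failure.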

Comment 1 RHEL Program Management 2023-05-16 06:22:05 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 4 Greg Farnum 2023-06-13 06:53:12 UTC
*** Bug 2189134 has been marked as a duplicate of this bug. ***

Comment 21 Amarnath 2023-06-27 08:47:45 UTC
Hi Xiubo,

Thanks for confirming.

One small query: why was the node6 MDS not becoming active in this case?

Am I restarting the service with too short an interval?

Regards,
Amarnath

Comment 22 Xiubo Li 2023-06-27 10:18:13 UTC
(In reply to Amarnath from comment #21)
> Hi Xiubo,
> 
> Thanks for confirming.
> 
> One small query: why was the node6 MDS not becoming active in this case?
> 
> Am I restarting the service with too short an interval?

I am not sure; the MDS logs show nothing about this. It may be an issue with the container itself.

Thanks
- Xiubo

Comment 24 errata-xmlrpc 2023-07-19 16:19:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.3 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:4213

