Bug 2207491
| Summary: | mds: do not take the ino which has been used | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Xiubo Li <xiubli> |
| Component: | CephFS | Assignee: | Xiubo Li <xiubli> |
| Status: | CLOSED ERRATA | QA Contact: | Amarnath <amk> |
| Severity: | high | Docs Contact: | Akash Raj <akraj> |
| Priority: | unspecified | | |
| Version: | 5.3 | CC: | akraj, amk, bkunal, ceph-eng-bugs, cephqe-warriors, gfarnum, mchangir, tserlin, vereddy, vshankar |
| Target Milestone: | --- | Flags: | amk: needinfo? |
| Target Release: | 5.3z4 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-16.2.10-183.el8cp | Doc Type: | Bug Fix |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-07-19 16:19:11 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2210690 | | |

Doc Text:

.Allocating inodes no longer fails
Previously, when replaying the journals, if the `inodetable` or `sessionmap` versions did not match the ones in the MDS cache, the corresponding `CInodes` were added to the _inode map_ without removing their `ino#` from the `inodetable` or the session's preallocated inos list. Allocating a new `CInode` could then fail because the allocated `ino#` was already present in the inode map.
With this fix, any `ino#` that is already in use is skipped when allocating a new `CInode` (see the illustrative sketch below).
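The doc text describes skipping inode numbers that are already in use when allocating a new `CInode`. The following is a minimal, illustrative C++ sketch of that idea only, not the actual Ceph MDS code; `InodeAllocator`, `inode_map`, `prealloc_inos`, and `alloc_ino` are hypothetical stand-ins for the MDS's real in-memory structures.

```cpp
// Minimal sketch (assumed names, not Ceph source): allocate an inode number
// from a preallocated list, skipping any ino# that already has an in-memory
// inode, instead of failing on the collision.
#include <cstdint>
#include <deque>
#include <iostream>
#include <optional>
#include <unordered_set>

using inodeno_t = uint64_t;

struct InodeAllocator {
  std::unordered_set<inodeno_t> inode_map;  // ino#s that already have an inode
  std::deque<inodeno_t> prealloc_inos;      // session's preallocated ino#s

  // Pop preallocated ino#s until one is found that is not already in use.
  std::optional<inodeno_t> alloc_ino() {
    while (!prealloc_inos.empty()) {
      inodeno_t ino = prealloc_inos.front();
      prealloc_inos.pop_front();
      if (inode_map.count(ino)) {
        // Journal replay left this ino# in the prealloc list even though an
        // inode for it already exists; skip it rather than fail.
        std::cerr << "skipping already-used ino 0x" << std::hex << ino << "\n";
        continue;
      }
      inode_map.insert(ino);
      return ino;
    }
    return std::nullopt;  // no usable preallocated ino# left
  }
};

int main() {
  InodeAllocator alloc;
  alloc.inode_map = {0x1001};             // 0x1001 already has an inode
  alloc.prealloc_inos = {0x1001, 0x1002};

  if (auto ino = alloc.alloc_ino())
    std::cout << "allocated ino 0x" << std::hex << *ino << "\n";  // 0x1002
}
```

Run as shown, the sketch skips 0x1001 and allocates 0x1002; the actual fix operates on the MDS `inodetable` and session state rather than plain STL containers.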
Description
Xiubo Li
2023-05-16 06:21:56 UTC
Please specify the severity of this bug. Severity is defined here: https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

*** Bug 2189134 has been marked as a duplicate of this bug. ***

Hi Xiubo,

Thanks, Xiubo, for confirming.

One small query: why is the node6 MDS not becoming active in this case? Am I restarting the service at too short an interval?

Regards,
Amarnath

(In reply to Amarnath from comment #21)
> One small query: why is the node6 MDS not becoming active in this case?
> Am I restarting the service at too short an interval?

I am not sure, because the MDS logs show nothing about this. It may be an issue with the container itself.

Thanks
- Xiubo

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.3 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:4213