Bug 2130450
| Summary: | [CephFS] Clone operations are failing with Assertion Error | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Kotresh HR <khiremat> |
| Component: | CephFS | Assignee: | Kotresh HR <khiremat> |
| Status: | CLOSED ERRATA | QA Contact: | Amarnath <amk> |
| Severity: | medium | Docs Contact: | Masauso Lungu <mlungu> |
| Priority: | unspecified | | |
| Version: | 5.2 | CC: | amk, ceph-eng-bugs, cephqe-warriors, gfarnum, hyelloji, mlungu, pasik, vereddy |
| Target Milestone: | --- | | |
| Target Release: | 6.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-17.2.3-45.el9cp | Doc Type: | Bug Fix |
| Doc Text: |
.The disk full scenario no longer corrupts the configuration file
Previously, subvolume configuration files were written directly to disk without using a temporary file: the existing configuration file was truncated and the new configuration data was then written to it. When the disk was full, the truncate succeeded but the subsequent write failed with a `no space` error, leaving an empty configuration file and causing all operations on the affected subvolumes to fail.
With this fix, the configuration data is written to a temporary file which is then renamed over the original configuration file, so the original file is never truncated (a sketch of this pattern follows the metadata fields below).
|
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-03-20 18:58:27 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2126050 | | |
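The Doc Text above describes a write-to-temporary-file-and-rename pattern. The actual fix lives in the Ceph mgr volumes module and operates on subvolume metadata files inside CephFS; the sketch below only illustrates the same idea with ordinary POSIX file operations, and the function and parameter names (`write_config_atomically`, `config_path`, `config_data`) are hypothetical, not Ceph APIs.

```python
import os
import tempfile


def write_config_atomically(config_path: str, config_data: bytes) -> None:
    """Write config_data to config_path without ever truncating the original.

    The temporary file is created in the same directory so the final
    os.replace() is an atomic rename on the same filesystem. If the write
    fails (for example with ENOSPC on a full disk), the original file at
    config_path is left untouched instead of being truncated to empty.
    """
    dir_name = os.path.dirname(config_path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, prefix=".tmp-config-")
    try:
        with os.fdopen(fd, "wb") as tmp_file:
            tmp_file.write(config_data)
            tmp_file.flush()
            os.fsync(tmp_file.fileno())    # make sure the data hits the disk
        os.replace(tmp_path, config_path)  # atomic rename over the original
    except OSError:
        # On any failure (e.g. disk full), discard the temporary file and
        # leave the existing configuration file as it was.
        os.unlink(tmp_path)
        raise
```

With this pattern, a full disk can only cause the temporary file to be incomplete; the previously written configuration remains intact and readable.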
Description
Kotresh HR
2022-09-28 07:36:47 UTC
Hi Kotresh,
Created a subvolume with a small amount of data.
Created more than 150 clones out of the subvolume.
Did not observe any errors.
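A minimal sketch of this verification loop, driving the standard `ceph fs subvolume snapshot clone` and `ceph fs clone status` commands from Python. The volume, subvolume, and snapshot names (`cephfs`, `subvol1`, `snap1`) are assumptions for illustration; the detailed commands actually used are in the linked document below.

```python
import subprocess


def ceph(*args: str) -> str:
    """Run a ceph CLI command and return its stdout (raises on failure)."""
    return subprocess.run(
        ["ceph", *args], check=True, capture_output=True, text=True
    ).stdout


# Hypothetical names used for illustration only.
VOLUME, SUBVOLUME, SNAPSHOT = "cephfs", "subvol1", "snap1"

# Snapshot the source subvolume once, then fan out many clones from it.
ceph("fs", "subvolume", "snapshot", "create", VOLUME, SUBVOLUME, SNAPSHOT)

for i in range(150):
    clone = f"clone_{i}"
    ceph("fs", "subvolume", "snapshot", "clone", VOLUME, SUBVOLUME, SNAPSHOT, clone)
    # Report the clone state; with the fix it should progress to completion
    # without the assertion errors described in this bug.
    print(clone, ceph("fs", "clone", "status", VOLUME, clone).strip())
```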
Verified in Version:
[root@ceph-amk-bz-2-8zczch-node7 ~]# ceph versions
{
"mon": {
"ceph version 17.2.5-8.el9cp (f2be93d8b38077bd58e70cf252dbbb4cf49e95e4) quincy (stable)": 3
},
"mgr": {
"ceph version 17.2.5-8.el9cp (f2be93d8b38077bd58e70cf252dbbb4cf49e95e4) quincy (stable)": 2
},
"osd": {
"ceph version 17.2.5-8.el9cp (f2be93d8b38077bd58e70cf252dbbb4cf49e95e4) quincy (stable)": 12
},
"mds": {
"ceph version 17.2.5-8.el9cp (f2be93d8b38077bd58e70cf252dbbb4cf49e95e4) quincy (stable)": 3
},
"overall": {
"ceph version 17.2.5-8.el9cp (f2be93d8b38077bd58e70cf252dbbb4cf49e95e4) quincy (stable)": 20
}
}
A detailed document with all the commands:
https://docs.google.com/document/d/1VuR2PlYrUwDWk6Aw1kKGxX18HcUZZIQn92yZgkmNhbI/edit
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2023:1360