Bug 1856961
| Summary: | [Tool] Update the ceph-bluestore-tool for adding rescue procedure for bluefs log replay | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Neha Ojha <nojha> |
| Component: | RADOS | Assignee: | Adam Kupczyk <akupczyk> |
| Status: | CLOSED ERRATA | QA Contact: | skanta |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 3.3 | CC: | akupczyk, assingh, bhubbard, ceph-eng-bugs, cswanson, gsitlani, jdurgin, kdreyer, linuxkidd, mmuench, mmurthy, nojha, pdhange, pdhiran, rollercow, rzarzyns, skanta, sseshasa, tpetr, tserlin, tvainio, vereddy, vumrao, ykaul |
| Target Milestone: | --- | Keywords: | Reopened |
| Target Release: | 5.0 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-16.0.0-8633.el8cp | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1821133 | Environment: | |
| Last Closed: | 2021-08-30 08:26:18 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1821133 | | |
| Bug Blocks: | 1856960 | | |
Comment 1
Josh Durgin
2020-07-22 19:33:25 UTC
*** Bug 1824004 has been marked as a duplicate of this bug. ***

*** Bug 1850287 has been marked as a duplicate of this bug. ***

While verifying the bug, noticed that the following bug exists - Bug ID: https://bugzilla.redhat.com/show_bug.cgi?id=1937318

Doc reference - https://access.redhat.com/solutions/5861771

Ceph version -

[ceph: root@magna045 /]# ceph -v
ceph version 16.1.0-100.el8cp (fd37c928e824870f3b214b12828a3d8f9d1ebbc1) pacific (rc)
[ceph: root@magna045 /]#

Verified the bug by executing the following steps (a consolidated sketch of the same procedure follows the verification output below):
On Installer Node:
1. ceph osd out osd.2

On OSD Node (Magna046):
2. podman stop 82c9f812fe5a
3. cd /var/lib/ceph/28d673ae-a8c6-11eb-8703-002590fbc342/osd.2
4. Modified the unit.run file as mentioned
5. systemctl start ceph-28d673ae-a8c6-11eb-8703-002590fbc342.service
6. podman exec -it 50ba2d598490 /bin/bash
7. mkdir /tmp/bluefs_export_osd.2
8. [root@magna046 /]# ceph-bluestore-tool bluefs-export --path /var/lib/ceph/osd/ceph-2/ --out-dir /tmp/bluefs_export_osd.2
inferring bluefs devices from bluestore path
slot 1 /var/lib/ceph/osd/ceph-2/block -> /dev/mapper/ceph--7f8dc0d9--2c26--4e12--bbe8--6e33c08047ae-osd--block--b164abfc--01c0--4d89--8aaa--3e36afef6424
db/
db/000028.sst
db/000031.sst
db/000032.sst
db/CURRENT
db/IDENTITY
db/LOCK
db/MANIFEST-000034
db/OPTIONS-000032
db/OPTIONS-000037
db.slow/
db.wal/
db.wal/000035.log
sharding/
sharding/def
[root@magna046 /]#
Ceph Version:
[ceph: root@magna045 /]# ceph -v
ceph version 16.2.0-8.el8cp (f869f8bf2b6e9c3886e94267d378de5d9d57bb61) pacific (stable)
[ceph: root@magna045 /]#
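
For reference, the verification steps above can be gathered into one script. This is a minimal sketch, not an official procedure: the OSD id, cluster FSID, container IDs, and export path are the example values from this comment, the variable names and the `bash -c` wrapper are illustrative, and the unit.run edit itself is a manual step that follows the doc reference linked above.

```bash
#!/usr/bin/env bash
# Sketch of the verification flow from this comment. All values below are the
# example values from this bug report and must be replaced per cluster.
set -euo pipefail

OSD_ID=2
FSID=28d673ae-a8c6-11eb-8703-002590fbc342
OSD_CONTAINER=82c9f812fe5a     # running OSD container before the change (podman ps)
NEW_CONTAINER=50ba2d598490     # container started after editing unit.run (podman ps)
EXPORT_DIR=/tmp/bluefs_export_osd.${OSD_ID}

# Step 1 (installer node): mark the OSD out.
ceph osd out osd.${OSD_ID}

# Steps 2-3 (OSD node): stop the OSD container and change to its data directory.
podman stop "${OSD_CONTAINER}"
cd /var/lib/ceph/${FSID}/osd.${OSD_ID}

# Step 4: edit the unit.run file here as described in the doc reference above
# (manual step, not scripted here).

# Step 5: restart the service so the modified unit.run takes effect.
systemctl start ceph-${FSID}.service

# Steps 6-8: inside the new container, export the BlueFS files. The original
# comment used an interactive "podman exec -it ... /bin/bash" and ran these
# commands by hand.
podman exec "${NEW_CONTAINER}" /bin/bash -c "
  mkdir -p ${EXPORT_DIR} &&
  ceph-bluestore-tool bluefs-export \
    --path /var/lib/ceph/osd/ceph-${OSD_ID}/ \
    --out-dir ${EXPORT_DIR}
"
```

After the export, the db/, db.slow/, and db.wal/ trees under /tmp/bluefs_export_osd.2 can be inspected, as in the listing above.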
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:3294