Red Hat Bugzilla – Bug 1831105
Bug 1831105 - Docs: [ceph-osd] osd failed to come up(ceph_assert(ondisk_format > 0)) on rhel 7.8 rhcs4 deployment
Keywords: AutomationBlocker
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Documentation
Sub Component: ---
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 4.1
Assignee: ceph-docs@redhat.com
QA Contact: Tejas
Docs Contact:
URL:
Whiteboard:
Depends On: 1822134
Blocks:
Reported: 2020-05-04 16:16 UTC by Andrew Schoen
Modified: 2020-05-08 15:07 UTC
CC List: 18 users (agunn, akupczyk, anharris, aschoen, bhubbard, ceph-eng-bugs, ceph-qe-bugs, dzafman, hyelloji, jdurgin, kchai, kdreyer, msekleta, nojha, rzarzyns, sseshasa, tchandra, vpoliset)
Fixed In Version:
Doc Type: Known Issue
Doc Text:
Cause: Using partitions for the --block.db and --block.wal arguments of the `ceph-volume lvm create` command. The `db` and `wal` options of the `lvm_volumes` config option in ceph-ansible are used to set those arguments during a deployment.
Consequence: Occasionally the OSD will not start, because udev resets the partitions' permissions back to root:disk after ceph-volume creates them.
Workaround (if any): Start the ceph-volume systemd unit manually for the failed OSD. For example, if the failed OSD has an ID of 8, run `systemctl start 'ceph-volume@lvm-8-*'`. If you also know the failed OSD's UUID, you can use the service command: `service ceph-volume@lvm-8-4c6ddc44-9037-477d-903c-63b5a789ade5 start`, where 4c6ddc44-9037-477d-903c-63b5a789ade5 is the UUID for osd.8.
Result: Permissions on the affected partitions are changed back to ceph:ceph, and the OSD restarts and joins the cluster.
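For context, a minimal sketch of the kind of ceph-ansible `lvm_volumes` entry the Cause describes, where partitions are passed to the `db` and `wal` options; the LV names and device paths below are hypothetical, not taken from this bug:

```yaml
# group_vars/osds.yml (sketch; LV names and partition paths are hypothetical)
lvm_volumes:
  - data: data-lv1      # logical volume holding the OSD data
    data_vg: data-vg1   # volume group containing data-lv1
    db: /dev/sdb1       # partition passed to ceph-volume as --block.db
    wal: /dev/sdb2      # partition passed to ceph-volume as --block.wal
```

With a layout like this, `ceph-volume lvm create` receives raw partitions for --block.db and --block.wal, which is the configuration affected by the udev permission reset described above.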
Clone Of: 1822134
Environment:
Last Closed: 2020-05-08 15:07:52 UTC
Embargoed:
Dependent Products: Red Hat OpenShift Container Storage, Red Hat OpenShift Data Foundation, Red Hat OpenStack