Bug 1605930
| Summary: | osd failed to upgrade with "Error: No cluster conf found in /etc/ceph" | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Tiffany Nguyen <tunguyen> |
| Component: | Ceph-Ansible | Assignee: | Sébastien Han <shan> |
| Status: | CLOSED INSUFFICIENT_DATA | QA Contact: | ceph-qe-bugs <ceph-qe-bugs> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.1 | CC: | aschoen, ceph-eng-bugs, gmeno, nthomas, sankarshan, seb, tunguyen, vakulkar |
| Target Milestone: | rc | Flags: | vakulkar: automate_bug? |
| Target Release: | 3.1 | | |
| Hardware: | Unspecified | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-08-07 18:58:59 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Description
Tiffany Nguyen, 2018-07-20 17:39:15 UTC

Created attachment 1466978 [details]: all.yml
Created attachment 1466984 [details]: osds.yml
Created attachment 1466988 [details]: hosts file

fsid info:

```
[root@c07-h29-6018r ~]# ceph fsid
9071b1aa-c5ea-451c-b1d0-06b2298c1901
```

Created attachment 1469625 [details]: ansible log

Re-ran rolling_upgrade.yml and attached the new ansible log. The upgrade is still failing, with pgs stuck degraded:
```
  cluster:
    id:     9071b1aa-c5ea-451c-b1d0-06b2298c1901
    health: HEALTH_WARN
            1012 pgs degraded
            6 pgs recovering
            1008 pgs recovery_wait
            1012 pgs stuck degraded
            1014 pgs stuck unclean
            recovery 1225343/184368312 objects degraded (0.665%)
            noout,noscrub,nodeep-scrub flag(s) set
```
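The degraded percentage in that health warning is just the degraded object count divided by the total object copies. A quick check that the figures in the status line are internally consistent:

```python
# Sanity-check the degraded-object percentage from the HEALTH_WARN line:
# "recovery 1225343/184368312 objects degraded (0.665%)"
degraded = 1_225_343
total = 184_368_312

pct = 100 * degraded / total
print(f"{pct:.3f}%")  # prints 0.665%, matching the reported value
```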
What does your ceph.conf say about this fsid?
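One way to answer that is to compare the fsid recorded in the node's conf file against what the cluster reports. A minimal sketch follows; the conf contents below are fabricated for illustration, and on a real node you would read /etc/ceph/ceph.conf and get the live value from `ceph fsid` instead of hard-coding it:

```shell
#!/bin/sh
# Sketch: compare the fsid in a ceph.conf against the cluster-reported fsid.
# A throwaway conf is generated here so the script is self-contained.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[global]
fsid = 9071b1aa-c5ea-451c-b1d0-06b2298c1901
EOF

# On a live node this would be: reported=$(ceph fsid)
reported="9071b1aa-c5ea-451c-b1d0-06b2298c1901"

# Pull the fsid value out of the conf file (key = value, with optional spaces).
conf_fsid=$(awk -F' *= *' '$1 == "fsid" {print $2}' "$conf")

if [ "$conf_fsid" = "$reported" ]; then
    echo "fsid match: $conf_fsid"
else
    echo "fsid MISMATCH: conf=$conf_fsid cluster=$reported"
fi
rm -f "$conf"
```

If the two values differ, the node is pointing at a conf for a different cluster, which would explain tooling failing to find the expected cluster conf during the upgrade.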