Bug 1866252

Summary: FFU 13->16.1: ceph OSDs are down and fail to start looking for /run/lvm/lvmetad.socket
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Francesco Pantano <fpantano>
Component: Ceph-Ansible Assignee: Guillaume Abrioux <gabrioux>
Status: CLOSED ERRATA QA Contact: Ameena Suhani S H <amsyedha>
Severity: high Docs Contact:
Priority: unspecified    
Version: 3.3 CC: aschoen, ceph-eng-bugs, gmeno, johfulto, jpretori, nthomas, tserlin, vereddy, ykaul
Target Milestone: z6   
Target Release: 3.3   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: RHEL: ceph-ansible-3.2.48-1.el7cp Ubuntu: ceph-ansible_3.2.48-2redhat1 Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2020-08-18 18:05:58 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1578730    

Description Francesco Pantano 2020-08-05 08:17:55 UTC
Description of problem:

During the FFU procedure we usually run:

1. docker2podman
2. system_upgrade

Step #2 upgrades the underlying OS and then reboots the system; after an OSD node is rebooted, all of its OSDs are down and fail to start with the following error:

```
-- Unit ceph-osd has begun starting up.
Aug 05 08:06:26 ceph-0 systemd[1]: Started Ceph OSD.
-- Subject: Unit ceph-osd has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit ceph-osd has finished starting up.
--
-- The start-up result is done.
Aug 05 08:06:26 ceph-0 ceph-osd-run.sh[57671]: Error: error checking path "/run/lvm/lvmetad.socket": stat /run/lvm/lvmetad.socket: no such file or directory
Aug 05 08:06:26 ceph-0 systemd[1]: ceph-osd: Main process exited, code=exited, status=125/n/a
Aug 05 08:06:26 ceph-0 systemd[1]: ceph-osd: Failed with result 'exit-code'.
```
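
For context on the failure mode: the "error checking path" message appears to be podman validating bind-mount sources before starting the container, and lvmetad is no longer shipped on RHEL 8, so /run/lvm/lvmetad.socket does not exist after the system_upgrade reboot while the generated ceph-osd-run.sh still requests it. Below is a minimal sketch of the kind of conditional bind mount that avoids the hard failure; the variable names, image reference, and exact podman invocation are illustrative assumptions, not the actual ceph-ansible template.

```
#!/bin/bash
# Sketch only (assumed names/paths): start the OSD container without
# hard-coding the lvmetad socket bind mount, since lvmetad does not
# exist on RHEL 8.

LVM_VOLUMES="-v /run/lvm:/run/lvm"

# Add the lvmetad socket only when the host actually provides it (RHEL 7 case).
if [ -S /run/lvm/lvmetad.socket ]; then
    LVM_VOLUMES="$LVM_VOLUMES -v /run/lvm/lvmetad.socket:/run/lvm/lvmetad.socket"
fi

exec /usr/bin/podman run --rm --net=host --privileged \
    $LVM_VOLUMES \
    -v /var/lib/ceph:/var/lib/ceph:z \
    -v /etc/ceph:/etc/ceph:z \
    --entrypoint /usr/sbin/ceph-osd \
    "${OSD_IMAGE:?container image must be set}" \
    -f -i "${OSD_ID:?OSD id must be set}"
```

The actual change shipped in ceph-ansible-3.2.48 (see "Fixed In Version" above); the sketch is only meant to illustrate why an unconditional bind mount of the socket makes the unit exit with status 125 once the path is gone.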


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 10 errata-xmlrpc 2020-08-18 18:05:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 3.3 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3504