Bug 1599359 - Upgrade from ceph 1.3 to 2 can take unnecessary hours during yum upgrade of large osd node (3h for ~2T)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Build
Version: 2.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z3
Target Release: 2.5
Assignee: Boris Ranto
QA Contact: subhash
Docs Contact: Bara Ancincova
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-07-09 15:32 UTC by Sofer Athlan-Guyot
Modified: 2018-11-27 21:16 UTC
CC List: 14 users

Fixed In Version: RHEL: ceph-10.2.10-36.el7cp
Doc Type: Bug Fix
Doc Text:
Previously, upgrading from Red Hat Ceph Storage 1.3 to 2 could take a significant amount of time because the fixfiles utility restored the SELinux context of all files in the /var/lib/ceph/osd/ directory sequentially. With this update, the ceph-disk utility is used to restore the SELinux context of the files in parallel, per OSD, which makes the upgrade process significantly faster on systems with multiple OSDs.
Clone Of:
Environment:
Last Closed: 2018-11-27 21:15:40 UTC
Embargoed:


Attachments:


Links:
Red Hat Product Errata RHBA-2018:3689 (last updated 2018-11-27 21:16:15 UTC)

Description Sofer Athlan-Guyot 2018-07-09 15:32:57 UTC
Description of problem: As part of an upgrade from OSP9 to OSP10, we upgraded Ceph from 1.3 to 2. During the upgrade of the OSD nodes, we found that each node took 3 hours to upgrade.

The whole process was stuck on the upgrade of ceph-selinux, which was running restorecon on the top-level directory (/var/lib/ceph/osd/). This:
 - adjusted the SELinux context of every file;
 - also, I believe, changed the ownership from root to ceph.

This is unnecessarily slow: ceph-disk fix does the same work far faster by spawning one process per OSD (see the sketch below).

The package post script should use that tool instead.
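
For illustration, a minimal sketch of the parallel approach (a hypothetical scriptlet; the real ceph-selinux post script and ceph-disk internals may differ):

    # Hypothetical sketch: relabel each OSD directory in its own
    # process instead of one sequential pass over /var/lib/ceph/osd/.
    for osd in /var/lib/ceph/osd/*; do
        restorecon -R "$osd" &    # one relabel process per OSD
    done
    wait                          # wait for all relabels to finish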

In the end we disabled SELinux (SELINUX=disabled plus a reboot) so that we could upgrade ceph-selinux, then ran one chown per OSD disk in parallel (~18 chowns per server), roughly as sketched below, and the process went down from 3 hours to about 20 minutes (all included).
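
The workaround amounted to something like the following (a reconstruction from this report, not the exact commands used):

    # With SELinux disabled, only ownership needs fixing; run one
    # chown per OSD disk in parallel (~18 per server in this report).
    for osd in /var/lib/ceph/osd/*; do
        chown -R ceph:ceph "$osd" &
    done
    wait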

We had more than 10 nodes to upgrade, so 10*3h became 10*20min. Please fix the install post script.

Comment 30 errata-xmlrpc 2018-11-27 21:15:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3689

