.When using the `rolling_update.yml` playbook to upgrade to Red Hat Ceph Storage 3.0, or from version 3.0 to later zStream releases of 3.0, users who use CephFS must manually upgrade the MDS cluster
Currently, the Metadata Server (MDS) cluster does not have built-in versioning or file system flags to support seamless upgrades of the MDS nodes without potentially causing assertions or other faults due to incompatible messages or other functional differences. For this reason, during any cluster upgrade it is necessary to first reduce the number of active MDS nodes for a file system to one, so that two active MDS nodes do not communicate with different versions. It is also necessary to take standbys offline, because any new `CompatSet` flags will propagate via the MDSMap to all MDS nodes and cause older MDS nodes to suicide.
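Before starting, it can help to record the current MDS layout so you know which `max_mds` value to restore at the end. A minimal sketch, assuming a file system named `cephfs` (the name is a placeholder for illustration):

    ceph fs get cephfs | grep max_mds    # record the current max_mds value
    ceph status                          # note which MDS daemons are active and which are standby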
To upgrade the MDS cluster:
. Reduce the number of ranks to 1:
    ceph fs set <fs_name> max_mds 1
. Deactivate all non-zero ranks, from the highest rank to the lowest, while waiting for each MDS to finish stopping:
    ceph mds deactivate <fs_name>:<n>
    ceph status # wait for MDS to finish stopping
. Take all standbys offline using `systemctl`:
    systemctl stop ceph-mds.target
    ceph status # confirm only one MDS is online and is active
. Upgrade the single active MDS and restart the daemon using `systemctl`:
    systemctl restart ceph-mds.target
. Upgrade and start the standby daemons. An end-to-end example of this sequence is sketched after this list.
. Restore the previous `max_mds` value for your file system:
    ceph fs set <fs_name> max_mds <old_max_mds>
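Putting the steps together, here is a minimal end-to-end sketch, assuming a single file system named `cephfs` that previously ran with `max_mds` set to 2 (the file system name and rank count are placeholders for illustration, and the package upgrade mechanism depends on your deployment):

    ceph fs set cephfs max_mds 1
    ceph mds deactivate cephfs:1        # deactivate the highest non-zero rank first
    ceph status                         # wait until rank 1 has finished stopping
    systemctl stop ceph-mds.target      # run on each standby node
    ceph status                         # confirm a single active MDS remains
    # upgrade the Ceph packages on the active MDS node, then:
    systemctl restart ceph-mds.target
    # upgrade the Ceph packages on each standby node, then:
    systemctl start ceph-mds.target
    ceph fs set cephfs max_mds 2        # restore the original max_mds value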
For steps on how to upgrade the MDS cluster in a container, refer to the
https://access.redhat.com/articles/2789521[Updating Red Hat Ceph Storage deployed as a Container Image] Knowledgebase article.
Description of problem:
When MDSs are upgraded to 12.2.3+, all online MDSs will suicide after the first upgraded MDS goes online.
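To see which daemon versions are running across the cluster before and during the upgrade, one option (assuming a Luminous-based, 12.2.x cluster) is:

    ceph versions    # reports how many daemons of each type are running each release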
Steps to Reproduce:
1. Take a RHCS 3.0 cluster and upgrade an MDS to a release based on 12.2.3.

Actual results:
The 12.2.2 (and earlier) MDSs will suicide.

Expected results:
The 12.2.2 (and earlier) MDSs continue functioning.
Caused by this backport: https://github.com/ceph/ceph/pull/18782
I think it's caused by this commit:
    Author: Yan, Zheng <email@example.com>
    Date:   Wed Oct 18 20:58:15 2017 +0800

        mds: don't rdlock locks in replica object while auth mds is recovering

        Auth mds may take xlock on the lock and change the object when replaying
        unsafe requests. To guarantee new requests and replayed unsafe requests
        (on auth mds) get processed in proper order, we shouldn't rdlock locks in
        replica object while auth mds of the object is recovering

        Signed-off-by: "Yan, Zheng" <firstname.lastname@example.org>
        (cherry picked from commit 0afbc0338e1b9f32340eaa74899d8d43ac8608fe)
The commit modified `CInode::encode_replica` and `CInode::_encode_locks_state_for_replica`.
Can you please add the changes made in the RHEL Installation Guide to the Ubuntu Installation Guide and the Container Guide as well?
Pushing this to the ASSIGNED state based on comments 34 and 35.
Moving this BZ to the VERIFIED state; the doc text for RHEL, Ubuntu, and Container looks good.