Bug 2121097

Summary: Ceph PG Autoscaler switched on for existing pools post rolling_update playbook
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Raz <rmaabari>
Component: Ceph-Ansible Assignee: Teoman ONAY <tonay>
Status: CLOSED WORKSFORME QA Contact: Vivek Das <vdas>
Severity: low Docs Contact:
Priority: unspecified    
Version: 4.3 CC: adking, aschoen, ceph-eng-bugs, gabrioux, gmeno, nthomas, tonay, vumrao
Target Milestone: ---   
Target Release: 5.3z1   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2022-09-21 15:24:16 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Raz 2022-08-24 12:54:38 UTC
Description of problem:

The rolling_update.yml playbook is expected to record (via set_fact) the pg autoscaler state of every pool in the cluster, disable the autoscaler on all pools before the upgrade begins, and then restore each pool's original autoscaler state once the upgrade completes.

All of our pools had the pg autoscaler in the off/warn state when the rolling_update playbook was executed in our environment; however, after the run the autoscaler state of every pool in the cluster had changed to "on".
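The expected save/disable/restore flow described above can be sketched as follows. This is an illustrative Python sketch, not actual ceph-ansible code; the pool names and the function are hypothetical:

```python
# Sketch of the flow the rolling_update.yml playbook is expected to follow:
# record each pool's pg_autoscale_mode, disable it for the upgrade, then
# restore the recorded mode rather than forcing every pool to "on".

def upgrade_preserving_autoscaler(pools, do_upgrade):
    """pools maps pool name -> pg_autoscale_mode ('on', 'off' or 'warn')."""
    saved = dict(pools)           # the "set_fact" step: remember original modes
    for name in pools:
        pools[name] = "off"       # disable the autoscaler before the upgrade
    do_upgrade()
    for name, mode in saved.items():
        pools[name] = mode        # restore each pool's original mode
    return pools

pools = {"rbd": "warn", "cephfs_data": "off", "device_health_metrics": "on"}
upgrade_preserving_autoscaler(pools, lambda: None)
print(pools)  # → {'rbd': 'warn', 'cephfs_data': 'off', 'device_health_metrics': 'on'}
```

The bug reported here corresponds to the restore step setting every pool to "on" instead of the saved mode.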

Version-Release number of selected component (if applicable):
ceph-ansible-4.0.62.8-1

How reproducible:
Upgrade a RHCEPH 4.2z4 cluster to 5.0z4 with the rolling_update.yml playbook while the existing pools have the pg autoscaler in the off/warn state.

Steps to Reproduce:
1. Start with a cluster running RHCEPH 4.2z4.
2. Verify that the existing pools' pg autoscaler is in the off/warn state.
3. Run the infrastructure-playbooks/rolling_update.yml Ansible playbook to upgrade to 5.0z4.
4. Once the upgrade completes successfully, review each pool's pg autoscaler status (it has switched to the "on" state).
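For reference, the per-pool autoscaler mode can be inspected and set on a live cluster with commands like the following (the pool name `mypool` is illustrative):

```
ceph osd pool autoscale-status
ceph osd pool get mypool pg_autoscale_mode
ceph osd pool set mypool pg_autoscale_mode warn
```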
Actual results:
After the upgrade completes, the pg autoscaler status of all pools has switched to the "on" state.

Expected results:
Existing pools whose pg autoscaler was in the off/warn state should remain in that state after the rolling_update.yml playbook runs.

Additional info:
Commenting out the following task in the rolling_update.yml playbook might serve as a workaround for this problem:

- name: re-enable pg autoscale on pools
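The task body is truncated above. As a hedged sketch only (the loop source, variable names, and condition below are assumptions, not the actual ceph-ansible source), a state-preserving version of the task would re-enable the autoscaler only on pools whose recorded mode was "on":

```yaml
# Sketch, not the real task: restore each pool's recorded mode instead of
# unconditionally setting pg_autoscale_mode to "on" for every pool.
- name: re-enable pg autoscale on pools
  command: "ceph osd pool set {{ item.name }} pg_autoscale_mode on"
  loop: "{{ saved_pool_autoscaler_modes }}"  # fact recorded before the upgrade (assumed name)
  when: item.mode == 'on'                    # skip pools that were off/warn
```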