Description of problem:
The rolling_update.yml playbook should record (set_fact) the pg autoscaler state of every pool in the cluster, disable the autoscaler on all pools before the upgrade begins, and afterwards re-enable it only on the pools where it was enabled at the start. All of our pools had a pg autoscaler status of off/warn when the rolling_update.yml playbook was executed on our environment, yet after the run the autoscaler state of every pool in the cluster had changed to "on".

Version-Release number of selected component (if applicable):
ceph-ansible-4.0.62.8-1

How reproducible:
Always. Use the rolling_update.yml playbook to upgrade a RHCEPH 4.2z4 cluster to 5.0z4 while existing pools have the pg autoscaler in the off/warn state.

Steps to Reproduce:
1. Run a cluster on RHCEPH 4.2z4.
2. Verify that the existing pools have the pg autoscaler in the off/warn state.
3. Run the infrastructure-playbooks/rolling_update.yml Ansible playbook to upgrade to 5.0z4.
4. Once the upgrade succeeds, review the pools' autoscaler status with the ceph pg autoscaler status command.

Actual results:
After the upgrade completes, the pg autoscaler status of all pools has switched to the "on" state.

Expected results:
Pools whose pg autoscaler was in the off/warn state before the upgrade should remain in that state after the rolling_update.yml playbook runs.

Additional info:
Commenting out the following task in the rolling_update.yml playbook may work around this problem:

- name: re-enable pg autoscale on pools
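
For illustration only, a minimal sketch of how the playbook could save each pool's autoscaler mode before the upgrade and restore it afterwards, rather than unconditionally re-enabling it. This is not the actual ceph-ansible code; the variable pool_names and the task names are assumptions, and only the ceph CLI calls (osd pool get/set pg_autoscale_mode) are real commands:

```yaml
# Hypothetical sketch, not the shipped rolling_update.yml tasks.
- name: record pg autoscaler mode per pool
  command: ceph osd pool get {{ item }} pg_autoscale_mode -f json
  register: autoscale_mode_before
  loop: "{{ pool_names }}"   # pool_names: assumed list of pool names
  changed_when: false

- name: disable pg autoscale on pools
  command: ceph osd pool set {{ item }} pg_autoscale_mode off
  loop: "{{ pool_names }}"

# ... upgrade tasks run here ...

- name: restore original pg autoscaler mode on pools
  command: >-
    ceph osd pool set {{ item.item }}
    pg_autoscale_mode {{ (item.stdout | from_json).pg_autoscale_mode }}
  loop: "{{ autoscale_mode_before.results }}"
```

With this approach a pool that was off/warn before the upgrade would be set back to off/warn instead of "on".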