Bug 2252788 - Discrepancy in the "osd pool autoscale-status" command between normal vs json formatted outputs
Summary: Discrepancy in the "osd pool autoscale-status" command between normal vs json...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 7.1z4
Assignee: Kamoltat (Junior) Sirivadhna
QA Contact: skanta
URL:
Whiteboard:
Depends On:
Blocks: 2361441
 
Reported: 2023-12-04 17:15 UTC by Pawan
Modified: 2025-05-07 12:46 UTC
CC: 9 users

Fixed In Version: ceph-18.2.1-308.el9cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 2361441 (view as bug list)
Environment:
Last Closed: 2025-05-07 12:46:47 UTC
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 65695 0 None None None 2025-03-28 21:45:43 UTC
Github ceph ceph pull 59444 0 None Merged reef: src/pybind/mgr/pg_autoscaler/module.py: fix 'pg_autoscale_mode' output 2025-03-28 21:45:43 UTC
Red Hat Issue Tracker RHCEPH-7998 0 None None None 2023-12-04 17:15:55 UTC
Red Hat Product Errata RHSA-2025:4664 0 None None None 2025-05-07 12:46:55 UTC

Description Pawan 2023-12-04 17:15:36 UTC
Description of problem:
We are observing a discrepancy between the plain and JSON formatted outputs of the "ceph osd pool autoscale-status" command.

[ceph: root@ceph-pdhiran-jjl3rc-node1-installer /]# ceph osd pool autoscale-status
POOL                   SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
.mgr                 448.5k                3.0        374.9G  0.0000                                  1.0       1              off        False
cephfs.cephfs.meta   20348                 3.0        374.9G  0.0000                                  4.0      16              off        False
cephfs.cephfs.data       0                 3.0        374.9G  0.0000                                  1.0      32              off        False
.rgw.root             2724                 3.0        374.9G  0.0000                                  1.0      32              off        False
default.rgw.log       3702                 3.0        374.9G  0.0000                                  1.0      32              off        False
default.rgw.control      0                 3.0        374.9G  0.0000                                  1.0      32              off        False
default.rgw.meta       382                 3.0        374.9G  0.0000                                  4.0      32              off        False
test_pool_1          639.5M                3.0        374.9G  0.0050                                  1.0      16              off        False
[ceph: root@ceph-pdhiran-jjl3rc-node1-installer /]# ceph osd pool autoscale-status -f plain
POOL                   SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
.mgr                 448.5k                3.0        374.9G  0.0000                                  1.0       1              off        False
cephfs.cephfs.meta   20348                 3.0        374.9G  0.0000                                  4.0      16              off        False
cephfs.cephfs.data       0                 3.0        374.9G  0.0000                                  1.0      32              off        False
.rgw.root             2724                 3.0        374.9G  0.0000                                  1.0      32              off        False
default.rgw.log       3702                 3.0        374.9G  0.0000                                  1.0      32              off        False
default.rgw.control      0                 3.0        374.9G  0.0000                                  1.0      32              off        False
default.rgw.meta       382                 3.0        374.9G  0.0000                                  4.0      32              off        False
test_pool_1          639.5M                3.0        374.9G  0.0050                                  1.0      16              off        False
[ceph: root@ceph-pdhiran-jjl3rc-node1-installer /]# ceph osd pool autoscale-status -f json

[
    {
        "actual_capacity_ratio": 3.4224374124008632e-06,
        "actual_raw_used": 1377840.0,
        "bias": 1.0,
        "bulk": false,
        "capacity_ratio": 3.4224374124008632e-06,
        "crush_root_id": -1,
        "effective_target_ratio": 0.0,
        "logical_used": 459280,
        "pg_autoscale_mode": "on",
        "pg_num_final": 1,
        "pg_num_ideal": 0,
        "pg_num_target": 1,
        "pool_id": 1,
        "pool_name": ".mgr",
        "raw_used": 1377840.0,
        "raw_used_rate": 3.0,
        "subtree_capacity": 402590269440,
        "target_bytes": 0,
        "target_ratio": 0.0,
        "would_adjust": false
    },
    {
        "actual_capacity_ratio": 1.5162810587774946e-07,
        "actual_raw_used": 61044.0,
        "bias": 4.0,
        "bulk": false,
        "capacity_ratio": 1.5162810587774946e-07,
        "crush_root_id": -1,
        "effective_target_ratio": 0.0,
        "logical_used": 20348,
        "pg_autoscale_mode": "on",
        "pg_num_final": 16,
        "pg_num_ideal": 0,
        "pg_num_target": 16,
        "pool_id": 2,
        "pool_name": "cephfs.cephfs.meta",
        "raw_used": 61044.0,
        "raw_used_rate": 3.0,
        "subtree_capacity": 402590269440,
        "target_bytes": 0,
        "target_ratio": 0.0,
        "would_adjust": false
    },
    {
        "actual_capacity_ratio": 0.0,
        "actual_raw_used": 0.0,
        "bias": 1.0,
        "bulk": false,
        "capacity_ratio": 0.0,
        "crush_root_id": -1,
        "effective_target_ratio": 0.0,
        "logical_used": 0,
        "pg_autoscale_mode": "on",
        "pg_num_final": 32,
        "pg_num_ideal": 0,
        "pg_num_target": 32,
        "pool_id": 3,
        "pool_name": "cephfs.cephfs.data",
        "raw_used": 0.0,
        "raw_used_rate": 3.0,
        "subtree_capacity": 402590269440,
        "target_bytes": 0,
        "target_ratio": 0.0,
        "would_adjust": false
    },
    {
        "actual_capacity_ratio": 2.029855319495722e-08,
        "actual_raw_used": 8172.0,
        "bias": 1.0,
        "bulk": false,
        "capacity_ratio": 2.029855319495722e-08,
        "crush_root_id": -1,
        "effective_target_ratio": 0.0,
        "logical_used": 2724,
        "pg_autoscale_mode": "on",
        "pg_num_final": 32,
        "pg_num_ideal": 0,
        "pg_num_target": 32,
        "pool_id": 4,
        "pool_name": ".rgw.root",
        "raw_used": 8172.0,
        "raw_used_rate": 3.0,
        "subtree_capacity": 402590269440,
        "target_bytes": 0,
        "target_ratio": 0.0,
        "would_adjust": false
    },
    {
        "actual_capacity_ratio": 2.7586359738521156e-08,
        "actual_raw_used": 11106.0,
        "bias": 1.0,
        "bulk": false,
        "capacity_ratio": 2.7586359738521156e-08,
        "crush_root_id": -1,
        "effective_target_ratio": 0.0,
        "logical_used": 3702,
        "pg_autoscale_mode": "on",
        "pg_num_final": 32,
        "pg_num_ideal": 0,
        "pg_num_target": 32,
        "pool_id": 5,
        "pool_name": "default.rgw.log",
        "raw_used": 11106.0,
        "raw_used_rate": 3.0,
        "subtree_capacity": 402590269440,
        "target_bytes": 0,
        "target_ratio": 0.0,
        "would_adjust": false
    },
    {
        "actual_capacity_ratio": 0.0,
        "actual_raw_used": 0.0,
        "bias": 1.0,
        "bulk": false,
        "capacity_ratio": 0.0,
        "crush_root_id": -1,
        "effective_target_ratio": 0.0,
        "logical_used": 0,
        "pg_autoscale_mode": "on",
        "pg_num_final": 32,
        "pg_num_ideal": 0,
        "pg_num_target": 32,
        "pool_id": 6,
        "pool_name": "default.rgw.control",
        "raw_used": 0.0,
        "raw_used_rate": 3.0,
        "subtree_capacity": 402590269440,
        "target_bytes": 0,
        "target_ratio": 0.0,
        "would_adjust": false
    },
    {
        "actual_capacity_ratio": 2.846566564050535e-09,
        "actual_raw_used": 1146.0,
        "bias": 4.0,
        "bulk": false,
        "capacity_ratio": 2.846566564050535e-09,
        "crush_root_id": -1,
        "effective_target_ratio": 0.0,
        "logical_used": 382,
        "pg_autoscale_mode": "on",
        "pg_num_final": 32,
        "pg_num_ideal": 0,
        "pg_num_target": 32,
        "pool_id": 7,
        "pool_name": "default.rgw.meta",
        "raw_used": 1146.0,
        "raw_used_rate": 3.0,
        "subtree_capacity": 402590269440,
        "target_bytes": 0,
        "target_ratio": 0.0,
        "would_adjust": false
    },
    {
        "actual_capacity_ratio": 0.004996936033248603,
        "actual_raw_used": 2011717824.0,
        "bias": 1.0,
        "bulk": false,
        "capacity_ratio": 0.004996936033248603,
        "crush_root_id": -1,
        "effective_target_ratio": 0.0,
        "logical_used": 670572608,
        "pg_autoscale_mode": "on",
        "pg_num_final": 32,
        "pg_num_ideal": 3,
        "pg_num_target": 16,
        "pool_id": 8,
        "pool_name": "test_pool_1",
        "raw_used": 2011717824.0,
        "raw_used_rate": 3.0,
        "subtree_capacity": 402590269440,
        "target_bytes": 0,
        "target_ratio": 0.0,
        "would_adjust": false
    }
]


We can see that the autoscale mode is wrongly displayed as "on" in the JSON output, while the plain-format output correctly reports it as "off".
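The likely shape of the problem (per the upstream fix referenced in GitHub pull 59444, which corrects the 'pg_autoscale_mode' output in src/pybind/mgr/pg_autoscaler/module.py) is that one output path reported the pool's configured mode without accounting for the cluster-wide noautoscale flag. A minimal sketch of the intended behavior; the function and parameter names here are illustrative, not Ceph's actual code:

```python
def effective_autoscale_mode(pool_mode: str, noautoscale: bool) -> str:
    """Return the autoscale mode that should be reported for a pool.

    When the cluster-wide `noautoscale` flag is set, every pool is
    effectively 'off' regardless of its per-pool setting. The symptom in
    this bug is consistent with the JSON formatter emitting the per-pool
    value ('on') while the plain-text formatter emitted the effective
    value ('off'); both should report the effective mode.
    """
    if noautoscale:
        return "off"
    return pool_mode

# With noautoscale set, both output formats should agree on "off".
print(effective_autoscale_mode("on", noautoscale=True))   # off
print(effective_autoscale_mode("on", noautoscale=False))  # on
```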

Version-Release number of selected component (if applicable):
# ceph version
ceph version 17.2.6-161.el9cp (7037a43d8f7fa86659a0575b566ec10080df0d71) quincy (stable)

How reproducible:
Always

Steps to Reproduce:
1. Deploy a 6.1z3 build and create a few pools.
2. Set the noautoscale flag on the cluster.
cmd : ceph osd pool set noautoscale
3. All pools will have the autoscale mode turned off.
4. Check the status via "ceph osd pool autoscale-status" and observe the difference in output between formats.
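The comparison in step 4 can be scripted. The sketch below assumes the plain and JSON outputs of "ceph osd pool autoscale-status" have already been captured into strings (e.g. via subprocess); it parses the AUTOSCALE column out of the plain table and diffs it against the pg_autoscale_mode field of the JSON array:

```python
import json

def find_mismatches(plain_output: str, json_output: str):
    """Return (pool, plain_mode, json_mode) triples where the plain and
    JSON outputs disagree on a pool's autoscale mode."""
    plain_modes = {}
    for line in plain_output.strip().splitlines()[1:]:  # skip header row
        cols = line.split()
        # AUTOSCALE is the second-to-last column (before BULK); indexing
        # from the right avoids problems with blank cells such as
        # TARGET SIZE and NEW PG_NUM shifting left-based positions.
        plain_modes[cols[0]] = cols[-2]

    mismatches = []
    for pool in json.loads(json_output):
        name, json_mode = pool["pool_name"], pool["pg_autoscale_mode"]
        plain_mode = plain_modes.get(name)
        if plain_mode is not None and plain_mode != json_mode:
            mismatches.append((name, plain_mode, json_mode))
    return mismatches
```

On an affected cluster with the noautoscale flag set, every pool would be reported as ("off", "on"); on a fixed build the function returns an empty list.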

Actual results:
The values for autoscale-status differ between the plain and JSON formats.

Expected results:
The values for autoscale-status should be the same in all formats.

Additional info:
We are not observing this issue with 7.0 builds; it occurs only in the latest 6.1z3 builds.

Comment 15 errata-xmlrpc 2025-05-07 12:46:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 7.1 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:4664

