Bug 2153654

Summary: Unable to set the replica count to 1, i.e., size=1, on pools
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Pawan <pdhiran>
Component: RADOS
Assignee: Matan Breizman <mbreizma>
Status: CLOSED ERRATA
QA Contact: Pawan <pdhiran>
Severity: high
Docs Contact: Eliska <ekristov>
Priority: unspecified
Version: 6.0
CC: akupczyk, amathuri, bhubbard, ceph-eng-bugs, cephqe-warriors, choffman, ekristov, jdurgin, kdreyer, ksirivad, lflores, mbreizma, nojha, pdhange, rfriedma, rzarzyns, sseshasa, vumrao
Target Milestone: ---
Flags: mbreizma: needinfo+
Target Release: 6.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-17.2.5-29.el9cp
Doc Type: Bug Fix
Doc Text:
.Users are now able to set the replica `size` to `1`
Previously, users were unable to set the pool `size` to `1`. The `check_pg_num()` function incorrectly calculated the projected placement group count for the pool, which resulted in an underflow. Because of this false result, the `pg_num` appeared to exceed the maximum limit. With this fix, the recent `check_pg_num()` changes are reverted; the calculation no longer underflows, and users can once again set the replica `size` to `1`.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-03-20 18:59:40 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 2126050    

Comment 5 Josh Durgin 2022-12-15 15:57:37 UTC
The bug is due to underflow when calculating the projected number of PGs; we should skip this pg_num check entirely when we're decreasing the size of a pool.

      projected += pg_num * size;
      projected -= pg_info.get_pg_num_target() * pg_info.get_size();
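
To illustrate the failure mode, here is a minimal, self-contained sketch (not the actual OSDMonitor code; the pool values are hypothetical). `projected` is unsigned, so when a pool's replica size is decreased, subtracting the old, larger contribution wraps around, and the huge wrapped value trips the max-PG check:

      #include <cstdint>
      #include <iostream>

      int main() {
        // Hypothetical values: a pool with pg_num 32 is having its
        // replica size decreased from 3 to 1.
        uint64_t pg_num = 32, size = 1;
        uint64_t cur_pg_num_target = 32, cur_size = 3;

        uint64_t projected = 0;
        projected += pg_num * size;                 // 32
        projected -= cur_pg_num_target * cur_size;  // 32 - 96 wraps to ~1.8e19

        // The wrapped value exceeds any pgs-per-OSD limit, so the
        // size change is rejected even though it would *reduce* PGs.
        std::cout << "projected = " << projected << "\n";

        // Doing the arithmetic in a signed type (or skipping the check
        // entirely when the pool is shrinking, as suggested above)
        // avoids the underflow.
        int64_t delta = int64_t(pg_num * size) - int64_t(cur_pg_num_target * cur_size);
        std::cout << "delta = " << delta << "\n";  // -64
        return 0;
      }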

Comment 32 errata-xmlrpc 2023-03-20 18:59:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360