Bug 1331770 - reweight-by-utilization accepts 0 and -ve values for 'max_change_osds'
Summary: reweight-by-utilization accepts 0 and -ve values for 'max_change_osds'
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 1.3.2
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: rc
Target Release: 2.1
Assignee: Sage Weil
QA Contact: shylesh
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-04-29 13:19 UTC by Harish NV Rao
Modified: 2017-07-30 15:10 UTC
CC: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-22 19:25:32 UTC
Embargoed:


Attachments


Links
System: Red Hat Product Errata
ID: RHSA-2016:2815
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Moderate: Red Hat Ceph Storage security, bug fix, and enhancement update
Last Updated: 2017-03-22 02:06:33 UTC

Description Harish NV Rao 2016-04-29 13:19:26 UTC
Description of problem:

   reweight-by-utilization accepts 0 and -ve values for 'max_change_osds'

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Execute: sudo ceph osd test-reweight-by-utilization 101 0.5 -4
2. Execute: sudo ceph osd test-reweight-by-utilization 101 0.5 0


Actual results:
negative and zero values are accepted

Expected results:
negative and zero values should not be accepted

Additional info:

Comment 2 Harish NV Rao 2016-04-29 14:05:53 UTC
[ubuntu@magna009 ~]$ sudo ceph osd reweight-by-utilization 101 0.5 -4
moved 3 / 128 (2.34375%)
avg 18.2857
stddev 3.49343 -> 3.09377 (expected baseline 3.95897)
min osd.18 with 24 -> 21 pgs (1.3125 -> 1.14844 * mean)
max osd.17 with 12 -> 12 pgs (0.65625 -> 0.65625 * mean)

oload 101
max_change 0.5
max_change_osds -4
average 0.745115
overload 0.752567
osd.18 weight 1.000000 -> 0.859970
[ubuntu@magna009 ~]$ sudo ceph osd df
ID WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE  VAR  
 0 1.00000        0      0      0      0     0    0 
 1 0.89999        0      0      0      0     0    0 
 2 0.89999        0      0      0      0     0    0 
15 0.09000  0.52400 29985M 21113M  8871M 70.41 0.94 
16 0.09000        0      0      0      0     0    0 
17 0.03000  1.00000 29985M 18194M 11790M 60.68 0.81 
18 0.03000  0.85997 29985M 25980M  4004M 86.64 1.16 
19 0.03000  1.00000 29985M 22570M  7415M 75.27 1.01 
20 0.03000  1.00000 29985M 20944M  9040M 69.85 0.94 
21 0.03000  0.95001 29985M 23826M  6159M 79.46 1.07 
22 0.03000  1.00000 29985M 23768M  6216M 79.27 1.06 
23 0.03000        0      0      0      0     0    0 
 3 0.89999        0      0      0      0     0    0 
 4 0.89999        0      0      0      0     0    0 
 5 0.89999        0      0      0      0     0    0 
 6 0.89999        0      0      0      0     0    0 
 7 0.89999        0      0      0      0     0    0 
 8 0.89999        0      0      0      0     0    0 
 9 0.89999        0      0      0      0     0    0 
10 0.89999        0      0      0      0     0    0 
11 0.89999        0      0      0      0     0    0 
12 0.89999        0      0      0      0     0    0 
13 0.89999        0      0      0      0     0    0 
14 0.89999        0      0      0      0     0    0 
              TOTAL   204G   152G 53499M 74.51      
MIN/MAX VAR: 0/1.16  STDDEV: 7.90
[ubuntu@magna009 ~]$ sudo ceph -s
    cluster d85641ab-934d-416e-beab-ef5de52a78f4
     health HEALTH_WARN
            2 pgs degraded
            2 pgs recovering
            2 pgs stuck unclean
            recovery 768/24332 objects degraded (3.156%)
            recovery 210/24332 objects misplaced (0.863%)
            1 near full osd(s)
            too few PGs per OSD (18 < min 30)
     monmap e1: 1 mons at {magna009=10.8.128.9:6789/0}
            election epoch 1, quorum 0 magna009
     osdmap e323: 24 osds: 23 up, 7 in; 1 remapped pgs
      pgmap v19477: 64 pgs, 1 pools, 42993 MB data, 12166 objects
            152 GB used, 53499 MB / 204 GB avail
            768/24332 objects degraded (3.156%)
            210/24332 objects misplaced (0.863%)
                  61 active+clean
                   2 active+recovering+degraded
                   1 active+remapped
recovery io 29794 kB/s, 8 objects/s
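
For context on the figures above: the printed "overload" threshold appears to be the average utilization scaled by the oload argument (overload = average * oload / 100). A small illustrative calculation using the values copied from the output; the formula itself is inferred from that output, not quoted from the Ceph source:

// Illustrative only: reproduces the "overload" figure from the output above,
// assuming the threshold is the average utilization scaled by oload/100.
#include <cstdio>

int main() {
  double average = 0.745115;                  // "average" line reported above
  int oload = 101;                            // first argument to reweight-by-utilization
  double overload = average * oload / 100.0;  // assumed relationship
  std::printf("overload %.6f\n", overload);   // ~0.752566, matching 0.752567 up to rounding
  return 0;
}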

Comment 3 Samuel Just 2016-05-02 21:00:13 UTC
This probably should not hold up 1.3.2 -- an advisory to the user would be the right thing.

Comment 6 shylesh 2016-10-20 04:36:16 UTC
[ubuntu@magna104 ~]$ sudo ceph osd reweight-by-utilization 120 0.05 -10
Error EINVAL: max_osds -10 must be positive
[ubuntu@magna104 ~]$ sudo ceph osd reweight-by-utilization 120 0.05 0
Error EINVAL: max_osds 0 must be positive
[ubuntu@magna104 ~]$ sudo ceph osd test-reweight-by-utilization 120 0.05 1.123
1.123 not valid:  1.123 not in --no-increasing
Invalid command:  unused arguments: [u'1.123']
osd test-reweight-by-utilization {<int>} {<float>} {<int>} {--no-increasing} :  dry run of reweight OSDs by utilization [overload-percentage-for-consideration, default 120]
Error EINVAL: invalid command


Now, reweight-by-utilization validates these arguments properly.

Verified on 10.2.3-8.el7cp.x86_64.
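
The EINVAL messages above suggest the fix adds a simple positivity check on the max_osds argument before any reweighting is attempted. A minimal sketch of such a guard, in the spirit of Ceph's C++ monitor code; the function, parameter, and stream names are assumptions for illustration, not taken from the actual source:

// Illustrative sketch only; names here are assumptions, not the actual Ceph code.
#include <iostream>
#include <cerrno>

static int validate_max_osds(int max_osds, std::ostream& err) {
  if (max_osds <= 0) {
    // Reject both zero and negative values, matching the error text above.
    err << "max_osds " << max_osds << " must be positive";
    return -EINVAL;
  }
  return 0;
}

int main() {
  // Example: -10 is rejected the same way the CLI now rejects it.
  if (validate_max_osds(-10, std::cerr) == -EINVAL)
    std::cerr << " (Error EINVAL)" << std::endl;
  return 0;
}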

Comment 8 errata-xmlrpc 2016-11-22 19:25:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2815.html

