Bug 676651 - procinfo command gets buffer overflow for hms calculations
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: procinfo
Version: 5.4
Hardware: All
OS: Linux
Priority: urgent
Severity: high
Target Milestone: rc
Target Release: 5.7
Assigned To: Jaromír Cápík
QA Contact: BaseOS QE - Apps
Keywords: ZStream
Depends On:
Blocks: 769857 784372 847650 854013 862822 864489 866391 869122 871540 877308
Reported: 2011-02-10 10:25 EST by Yogesh
Modified: 2013-10-02 07:59 EDT
CC: 10 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, the procinfo command calculated the system idle time in a way that caused arithmetic overflows. As a consequence, procinfo displayed the system idle time incorrectly, which eventually resulted in buffer overflows. With this update, procinfo has been modified to convert variables to a larger data type before they are used in the calculation so that procinfo now always displays the system idle time correctly. Buffer overflows no longer occur under these circumstances.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-10-02 07:59:54 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:


Attachments
hms conversion patch (738 bytes, patch)
2011-02-10 10:31 EST, Yogesh
no flags


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Legacy) 45936 None None None Never

Description Yogesh 2011-02-10 10:25:54 EST
Description of problem:
The procinfo command shows wrong values for the system idle time (hms) once the idle time exceeds approximately 498 days. The reported idle time is the sum of the cpu_idle time of all CPUs, so the threshold is reached proportionally sooner on SMP systems. This sometimes leads to a buffer overflow.

Version-Release number of selected component (if applicable):
procinfo-18-19

How reproducible:
Almost always (once the cumulative cpu_idle time exceeds 498 days)

Steps to Reproduce:
1. Locate a RHEL 5.4 system whose cumulative cpu_idle time exceeds 498 days (1 CPU = 498 days, 2 CPUs = 249 days, 4 CPUs = 124 days, 8 CPUs = 62 days, 16 CPUs = 31 days).

2. Run the procinfo command.
  
Actual results:
The procinfo output contains bad values for hours, minutes, and seconds, for example:
idle  : 498d 11935:715877:42949680.38

Expected results:
The output should show correct hms values: hours below 24, minutes and seconds below 60. Something like:
idle  :   498d 20:17:38.27
Comment 1 Yogesh 2011-02-10 10:31:36 EST
Created attachment 478073 [details]
hms conversion patch
Comment 20 Miroslav Svoboda 2012-01-04 12:50:53 EST
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
Previously, the procinfo command calculated the system idle time in a way that caused arithmetic overflows. As a consequence, procinfo displayed the system idle time incorrectly, which eventually resulted in buffer overflows. With this update, procinfo has been modified to convert variables to a larger data type before they are used in the calculation so that procinfo now always displays the system idle time correctly. Buffer overflows no longer occur under these circumstances.
