Description of problem:
The procinfo command shows wrong values in the hours:minutes:seconds part of the system idle time once the idle time exceeds approximately 498 days. The idle time is the sum of the idle times of all CPUs, so the threshold is reached proportionally sooner on SMP systems. In some cases the miscalculation also leads to a buffer overflow.

Version-Release number of selected component (if applicable):
procinfo-18-19

How reproducible:
Almost always (once the cumulative CPU idle time exceeds ~498 days)

Steps to Reproduce:
1. Locate a RHEL 5.4 system whose cumulative CPU idle time exceeds 498 days (1 CPU: 498 days; 2 CPUs: 249 days; 4 CPUs: 124 days; 8 CPUs: 62 days; 16 CPUs: 31 days).
2. Run the procinfo command.

Actual results:
The procinfo output contains bad values for hours, minutes, and seconds, e.g.:
idle  : 498d 11935:715877:42949680.38

Expected results:
The output should contain correct values: hours below 24, minutes and seconds below 60. Something like:
idle  : 498d 20:17:38.27
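For context, the ~498-day threshold follows from the kernel reporting CPU times in jiffies: with a tick rate of HZ=100 (an assumption about the RHEL 5 x86 environment, not taken from the procinfo source), a 32-bit jiffy counter wraps after 2^32/100 seconds. A minimal sketch of that arithmetic:

/* Hypothetical illustration (not procinfo source): why the bad values
 * appear after ~498 days, and sooner on SMP. */
#include <stdio.h>

int main(void)
{
    const double hz = 100.0;                  /* assumed RHEL 5 x86 tick rate */
    const double wrap_s = 4294967296.0 / hz;  /* 2^32 jiffies, in seconds */

    printf("32-bit jiffies wrap after %.0f s (~%.1f days)\n",
           wrap_s, wrap_s / 86400.0);

    /* The summed idle time of N CPUs grows N times faster, so the
     * threshold halves with each doubling of the CPU count. */
    for (int ncpu = 1; ncpu <= 16; ncpu *= 2)
        printf("%2d CPU(s): threshold after ~%.0f days\n",
               ncpu, wrap_s / 86400.0 / ncpu);
    return 0;
}

This prints a wrap point of about 497 days for one CPU, then roughly 249, 124, 62, and 31 days for 2, 4, 8, and 16 CPUs, matching the figures in the reproduction steps above.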
Created attachment 478073: hms conversion patch
Technical note added. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team.

New Contents:
Previously, the procinfo command calculated the system idle time in a way that caused arithmetic overflows. As a consequence, procinfo displayed the system idle time incorrectly, which eventually resulted in buffer overflows. With this update, procinfo has been modified to convert variables to a larger data type before they are used in the calculation, so that procinfo now always displays the system idle time correctly. Buffer overflows no longer occur under these circumstances.
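As a rough illustration of the fix described in the note, here is a minimal sketch of a conversion that widens the jiffy count to 64 bits before any arithmetic, so no intermediate product or remainder can wrap a 32-bit value. This is a hypothetical reconstruction, not the actual patch from attachment 478073; the function name print_hms and the HZ parameter are invented for the example.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical sketch: convert a 64-bit jiffy count to d hh:mm:ss.cc.
 * Because all arithmetic is done in uint64_t, the result stays correct
 * well past the ~498-day mark where 32-bit math overflows. */
static void print_hms(uint64_t jiffies, unsigned hz)
{
    uint64_t secs = jiffies / hz;
    uint64_t frac = (jiffies % hz) * 100 / hz;  /* hundredths of a second */

    printf("%llud %llu:%02llu:%02llu.%02llu\n",
           (unsigned long long)(secs / 86400),
           (unsigned long long)(secs % 86400 / 3600),
           (unsigned long long)(secs % 3600 / 60),
           (unsigned long long)(secs % 60),
           (unsigned long long)frac);
}

int main(void)
{
    /* 498 days 20:17:38.27 of idle time, expressed in HZ=100 jiffies;
     * this value (about 4.31e9) already exceeds 2^32. */
    uint64_t idle = (uint64_t)498 * 86400 * 100
                  + 20 * 3600 * 100 + 17 * 60 * 100 + 38 * 100 + 27;
    print_hms(idle, 100);  /* prints: 498d 20:17:38.27 */
    return 0;
}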