Red Hat Bugzilla – Bug 1263765
free -h segfault
Last modified: 2017-07-27 10:32 EDT
Description of problem:
During testing of BZ#1246379, I tried to pass huge numbers (petabytes of memory) to the free utility. When asked to produce output in human-readable form (-h), free segfaults.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. copy /proc/meminfo file and change "MemTotal" value to "123123123123123"
2. mount modified meminfo:
mount --bind meminfo /proc/meminfo
3. free -h
total used free shared buff/cache available
In my case the free tool starts to crash once MemTotal exceeds 1192928601000 kB, and the probability of crashing increases as the value grows further. The crash therefore likely also depends on other values that change over time. In any case, it is very probably an overflow in the calculations and needs to be fixed.
*** Bug 1362588 has been marked as a duplicate of this bug. ***
According to a comment in the code made by the author, 'free -h' generally behaves as described above on systems with more than 1 PB of memory.
The problem is caused by a character overflow in the fixed-width memory-info column. A fix preventing this behaviour was deferred by upstream, since no computer with that much memory exists so far.
Patch under construction.
Created attachment 1305425 [details]
Upstream backport patch with additional CLI options