Hi,

I have the product running on Linux 6.0 (egcs 2.91.66). In this product we need to pre-allocate the file used for storage. The preallocation is done by writing zeros into the file up to the desired size; the file is then used to store structures (objects or records). If I pre-allocate the file, the application that puts objects into it becomes slow compared to writing into a non-pre-allocated file.

I have attached a compressed tar file that demonstrates this performance degradation. It contains a C program, create.c, which when compiled and run with the -i (preallocate) option takes more time than without -i. Timing is built in, so just run the runtest script.

I have a workaround, shown in the testcase file recreate.c: instead of filling the whole file with zeros, I seek to the offset of the desired file size and write a single zero there. This shows no degradation, but I was not able to find the reason for the degradation in the first place. Moreover, even with this workaround the test case takes twice as long as on Solaris or AIX machines. Could you point me to the ext2fs tuning parameters that would improve performance?

Thanks,
Vaibhav Nalawade
Created attachment 11322 [details] tar file
If you seek past the end and write there, you create "holes" in the file -- ranges for which no blocks have to be written -- so that should indeed be faster than actually writing the whole file to disk. This looks like expected behavior to me.