Red Hat Bugzilla – Bug 227180
"Argument list too long"
Last modified: 2007-11-30 17:11:56 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:220.127.116.11) Gecko/20061221 Firefox/18.104.22.168
Description of problem:
This limit is a major headache for many users who are not comfortable with the command line or do not know how to combine xargs with other programs. It can even put new users off Linux entirely; I found many mailing-list threads from people trying to work around it.
The error typically appears when deleting a large number of files, or when processing many files with long names or sequential numbering.
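A hedged sketch of both the failure and the usual xargs workaround (the /tmp/argdemo directory and file count are invented for illustration):

```shell
# In a sufficiently large directory, the glob expansion of *.txt can
# overflow the kernel's argument buffer:
#   rm *.txt        ->  /bin/rm: Argument list too long
mkdir -p /tmp/argdemo && cd /tmp/argdemo
for i in $(seq 1 1000); do : > "file_$i.txt"; done

# xargs reads the names from stdin and runs rm in batches, so no single
# exec() ever exceeds the limit; -print0/-0 keep odd file names safe:
find . -maxdepth 1 -name '*.txt' -print0 | xargs -0 rm --
```

GNU find's `-delete` action is another option that avoids building any argument list at all.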
Bugzilla "Bug 215510: argument list is too long" concluded that this is not a bug in coreutils, but it is a bug, or at least a major headache, in the kernel. It is a usability problem for anyone hoping to replace Windows, and the limit itself dates from the days of limited memory.
I ran into this issue while trying to uudeview a multipart Usenet post that Pan could not process automatically. I had to download all the parts a second time and then do some fancy work to get things working.
Given the amount of memory available today, raising this limit carries little risk. If it were a variable that could be configured at boot time, administrators could increase it later as needed.
As my experience taught me, the limit counts all the characters in the paths as well, not just the number of arguments.
The files I was working with had names over 60 characters long, and even a simple grep over a large list of text files can trigger the error if you are not in the directory containing them.
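A hedged sketch of that grep case (the /tmp/grepdemo directory and the search pattern are invented): both forms below search every file without ever handing one oversized argument list to a single exec().

```shell
mkdir -p /tmp/grepdemo
printf 'needle here\n' > /tmp/grepdemo/a.txt
printf 'nothing\n'     > /tmp/grepdemo/b.txt

# Instead of: grep -l needle /tmp/grepdemo/*.txt  (glob may overflow argv)
find /tmp/grepdemo -name '*.txt' -exec grep -l needle {} +
# Or let grep walk the tree itself, so no argument list is built at all:
grep -rl --include='*.txt' needle /tmp/grepdemo
```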
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Have a large list of files with long names or long paths.
2. Try to copy them or run them through another command.
3. Grep a word from a large list of *.txt files.
"Argument list is too long"
The command should have worked as expected.
This isn't a bug in the normal sense; rather, the default needs to be fixed to recognize the changes in hardware. If MAX_ARG_PAGES were exposed as a boot-time variable, the default could be changed as needed.
From the above link:
One of the advantages of using an open-source kernel is that you are able to examine exactly what it is configured to do and modify its parameters to suit the individual needs of your system. Method #4 involves manually increasing the number of pages that are allocated within the kernel for command-line arguments. If you look at the include/linux/binfmts.h file, you will find the following near the top:
/*
 * MAX_ARG_PAGES defines the number of pages allocated for arguments
 * and envelope for the new program. 32 should suffice, this gives
 * a maximum env+arg of 128kB w/4KB pages!
 */
#define MAX_ARG_PAGES 32
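The quoted figures check out; with 4 KB pages, 32 pages give the 128 kB ceiling mentioned in the comment, and doubling the define would allow 256 KB:

```shell
echo $(( 32 * 4096 ))    # 131072 bytes = 128 KB for argv + environment
echo $(( 64 * 4096 ))    # 262144 bytes = 256 KB with MAX_ARG_PAGES 64

# On a running system, the effective limit is reported by:
getconf ARG_MAX
```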
Even with MAX_ARG_PAGES raised to 64, the longest possible command line would occupy only 256 KB of system memory, not very much by today's system hardware standards. The proposed change:

#define MAX_ARG_PAGES 64
It would be nice to have this in FC7, at least for the desktop version.
This change was proposed upstream and rejected.