https://www.in-ulm.de/~mascheck/various/argmax/
ARG_MAX, maximum length of arguments for a new process
or why do you get
command: arg list too long
2002-06-06 .. 2016-09-07 (see recent changes)
Here you’ll find
More about the nature of this limit
The effectively usable space …and a way to determine it reliably
…alternatively: about the GNU autoconf check
How to avoid the limit in a shell
other limits: number of arguments and maximum length of one argument
Actual values for ARG_MAX (with some more details in the footnotes)
More pages
More about the nature of this limit
You will see this error message if you call a program with too many arguments, that is,
most likely in connection with pattern matching:
$ command *
On some systems the limit is even hit with "grep pattern /usr/include/*/*" (apart from that, using find would be more appropriate).
Only the exec() system call and its direct variants yield this error.
They return the corresponding error condition E2BIG (<sys/errno.h>).
The shell is not to blame; it merely delivers this error to you.
In fact, shell expansion is not the problem, because at that point exec() is not needed yet.
Expansion is only limited by the virtual memory system resources [1].
Thus the following commands work smoothly: instead of handing too many arguments to a new process,
they only make use of a shell built-in (echo) or iterate over the arguments with a control structure (a for loop):
/dir-with-many-files$ echo * | wc -c
/dir-with-many-files$ for i in * ; do grep ARG_MAX "$i"; done
There are different ways to learn the upper limit
command: getconf ARG_MAX [2]
system call: sysconf(_SC_ARG_MAX) [3]
system header: ARG_MAX in e.g. <[sys/]limits.h> [4]
try xargs --show-limits [5], if you use GNU xargs
(However, on the few systems that have no limit for ARG_MAX, these methods might wrongly print a limit.)
From Version 7 on, the limit was defined by NCARGS (usually in <[sys/]param.h>).
Later, ARG_MAX was introduced with 4.4BSD and System V.
In contrast to the headers, sysconf and getconf tell the limit which is actually in effect.
This is relevant on systems which allow changing it at run time (AIX), by reconfiguration (UnixWare, IRIX),
by recompiling (e.g. Linux) or by applying patches (HP-UX 10) - see the end notes for more details.
(Usually these are solutions for special requirements only, because increasing the limit doesn't solve the underlying problem.)
[1] However, in contrast to such expansions (which includes the literal overall command line length in scripts),
shells do have a limit for the interactive command line length (that is, what you may type in after the prompt).
But this limit is shell specific and not related to ARG_MAX.
Interestingly, putenv(3) is also limited only by system resources. You just can't exec() anymore once you are over the limit.
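This can be demonstrated from the shell (a sketch; the size is chosen assuming ARG_MAX is about 2MB or less, as on a current Linux with default stack limits - on systems with a higher or no limit the exec() will simply keep working):

```shell
# A huge exported variable: the shell itself (built-ins, expansion)
# handles it fine; only a subsequent exec() of an external program fails.
BIG=$(printf '%02500000d' 0)       # 2.5 million bytes
export BIG
echo "stored ${#BIG} bytes in the environment"   # echo is a built-in: works
if /bin/true 2>/dev/null; then
  echo "exec still works (ARG_MAX is larger here)"
else
  echo "exec fails now: E2BIG"
fi
unset BIG                          # back to a usable environment
```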
[2] 4.4BSD and its successors ( NetBSD since 1.0, OpenBSD 2.0, FreeBSD 2.0 ) provide: sysctl kern.argmax.
getconf in turn was introduced on BSDs with these versions: NetBSD 1.0, OpenBSD 2.0, FreeBSD 4.8.
[3] example usage of sysconf():
#include <stdio.h>
#include <unistd.h>
int main() {
return printf("ARG_MAX: %ld\n", sysconf(_SC_ARG_MAX));
}
[4] A handy way to find the limits in your headers, if you have a cpp(1) installed which doesn't abort on a missing file
(inspired by Era Eriksson's page about ARG_MAX):
cpp <<EOF
#include <limits.h>
#include <param.h>
#include <params.h>
#include <sys/limits.h>
#include <sys/param.h>
#include <sys/params.h>
arg_max: ARG_MAX
ncargs: NCARGS
EOF
If your cpp doesn’t like non-existent files, you might try
for file in limits.h param.h params.h sys/limits.h sys/param.h sys/params.h linux/limits.h linux/param.h; do
cpp <<EOF 2>&1
#include <$file>
arg_max: ARG_MAX
ncargs: NCARGS
EOF
done|egrep 'arg_max|ncargs' |egrep -v 'ARG_MAX|NCARGS'
[5] $ xargs --show-limits
environment variables take up 533 bytes
POSIX upper limit on argument length (this system): 2094571
POSIX smallest allowable upper limit on argument length (all systems): 4096
Maximum length of command we could actually use: 2094038
Size of command buffer we are actually using: 131072
The effectively usable space
When looking at ARG_MAX/NCARGS, you have to consider the space consumption of both argv[] and envp[] (arguments and environment).
Thus you have to decrease ARG_MAX at least by the result of "env|wc -c" plus 4 times the result of "env|wc -l" [6] for a good estimate of the currently available space.
[6] Every entry in envp[] is terminated with a null byte. The env utility prints a terminating newline instead, so the result of "wc -c" is the same.
"wc -l", in turn, accounts for the number of pointers in envp[], i.e., usually 4 bytes each, according to sizeof().
Some modern shells allow exporting functions to the environment. The calculation above is then slightly off,
because function definitions tend to contain newlines, which are miscounted as new envp[] entries.
The same applies if variable values contain newlines.
You can make wc -l ignore the wrappings and limit it to lines with a = at the right place:

expr `getconf ARG_MAX` - `env|wc -c` - `env|egrep '^[^ ]+='|wc -l` \* 4
(thanks to Michael Klement for pointing out the function issue and improving the calculation)
POSIX suggests subtracting an additional 2048 so that the process may safely modify its environment. A quick estimate with the getconf command:
(all the calculations inspired by a post from Gunnar Ritter in de.comp.os.unix.shell, 3B70A6AD.3L8115910@bigfoot.de)
expr `getconf ARG_MAX` - `env|wc -c` - `env|wc -l` \* 4 - 2048
or, if you even want to account for wrapped function definitions or variable values (see the footnote above):

expr `getconf ARG_MAX` - `env|wc -c` - `env|egrep '^[^ ]+='|wc -l` \* 4 - 2048
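With a POSIX shell, the same estimation can also be written with arithmetic expansion instead of expr (a sketch; it assumes getconf ARG_MAX prints a number, which it doesn't on systems without the limit):

```shell
# Estimate the currently available argument space: ARG_MAX minus the
# environment's bytes, minus 4 bytes per envp[] pointer, minus POSIX's
# recommended 2048-byte safety margin.
avail=$(( $(getconf ARG_MAX) - $(env|wc -c) - $(env|wc -l) * 4 - 2048 ))
echo "roughly available: $avail bytes"
```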
…and a way to determine it reliably
The most reliable way to determine the currently available space is to test the success of an exec() with increasing argument length until it fails.
This may be expensive, but you need to check only once, the length of envp[] is taken into account automatically, and the result is reliable.
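Such a test can be sketched in shell, bisecting over the number of fixed-size arguments (assumptions: getconf ARG_MAX prints a number, and yes(1) and /bin/true are available; with 1000-byte chunks the result is only a lower bound):

```shell
# Bisect over the number of 1000-byte arguments that /bin/true can still
# be exec()ed with; the shell's own failed exec delivers the E2BIG.
chunk=$(printf '%01000d' 0)              # one 1000-byte argument
lo=1 hi=$(( $(getconf ARG_MAX) / 500 ))  # hi is picked so that it surely fails
while [ $(( hi - lo )) -gt 1 ]; do
  mid=$(( (lo + hi) / 2 ))
  if /bin/true $(yes "$chunk" | head -n "$mid") 2>/dev/null
  then lo=$mid                           # still fits: raise the lower bound
  else hi=$mid                           # arg list too long: lower the bound
  fi
done
echo "currently usable: at least $(( lo * 1000 )) bytes"
```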
…alternatively: about the GNU autoconf check
There's an autoconf check "Checking for maximum length of command line arguments...". It works quite similarly.
However, it results in a much lower value (it can be only a fourth of the actual value), both by intention and for reasons of simplicity:
In a loop with increasing n, the check tries an exec() with an argument length of 2^n (but won't check for n higher than 16, that is, 512kB).
The maximum is thus ARG_MAX/2 if ARG_MAX is a power of 2.
Finally, the value found is divided by 2 (for safety), with the reason given that "C++ compilers can tack on massive amounts of additional arguments".
How to avoid the limit in a shell
If command * fails, then you can
iterate with the shell:
for i in *; do command "$i"; done (simple, completely robust and portable, may be very slow)
printf '%s\0' *|xargs -0 command (works only if printf is a built-in, but then it can be much faster with high argument counts; thanks to Michael Klement)
iterate with find
find . -exec command {} \; (simple, completely robust and portable, may be very slow)
find . -exec command {} + (optimizes speed, quite portable)
find . -print0|xargs -0 command (optimizes speed, if find doesn’t implement “-exec +” but knows “-print0”)
find . -print|xargs command (if there’s no white space in the arguments)
Note: find descends into directories. To avoid that portably, you can use
"find . ! -name . -prune […]"
If the major part of the arguments consists of long, absolute or relative paths, then try to move your actions into the directory:
cd /directory/with/long/path; command *
And another quick fix may be to match fewer arguments:
command [a-e]; command [f-m]; …
Number of arguments and maximum length of one argument
At least on Linux 2.6, there’s also a limit on the maximum number of arguments in argv[].
On Linux 2.6.14, the function do_execve() in fs/exec.c tests whether the number exceeds
(PAGE_SIZE*MAX_ARG_PAGES - sizeof(void *)) / sizeof(void *)
On a 32-bit Linux, this is ARG_MAX/4 - 1 (32767). It becomes relevant if the average length of the arguments is smaller than 4.
Since Linux 2.6.23, this function tests whether the number exceeds MAX_ARG_STRINGS in <linux/binfmts.h> (2^32-1 = 4294967295).
As an additional limit since 2.6.23, a single argument must not be longer than MAX_ARG_STRLEN (131072).
This might become relevant if you generate a long call like "sh -c 'automatically generated command with many arguments'".
(pointed out by Xan Lopez and Ralf Wildenhues)
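The per-argument limit can be observed directly (a sketch, assuming Linux 2.6.23 or later with 4kB pages, where MAX_ARG_STRLEN is 32 pages = 131072 bytes; note that the terminating null byte counts against the limit):

```shell
# One argument of 131072 characters needs 131073 bytes including the
# null byte and is rejected; one character less just fits.
long=$(printf '%0131072d' 0)
/bin/true "$long" 2>/dev/null || echo "131072 chars: arg list too long"
/bin/true "${long%?}" && echo "131071 chars: fine"
```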
Actual values for ARG_MAX (or NCARGS)
The maximum length of arguments for a new process varies so much among unix flavours that I had a look at some systems:
System | value | getconf available (+ / ? / -) | default value determined by
non-competitive: 1st edition (V1) 255+? [1stEd] experiments
non-competitive: V4, V5 and V6 512 documentation of exec(2) in V4, V6 and (no manual) sys1.c in V5
Version 7,
3 BSD,
System III, SVR1,
Ultrix 3.1 5120 NCARGS in <sys/param.h>
4.0/4.1/4.2 BSD 10240 NCARGS in <sys/param.h>
4.3BSD and 4.3BSD-Tahoe 20480 NCARGS in <sys/syslimits.h>
4.3BSD-Reno, 4.3BSD-Net2,
4.4 BSD (alpha/lite/encumbered),
386BSD*, NetBSD 0.9,
BSD/OS 2.0 20480 ARG_MAX in <sys/syslimits.h> (NCARGS in <sys/param.h>)
POSIX/SUSv2,v3,v4 [posix] 4096 (minimum) + minimum _POSIX_ARG_MAX in <limits.h> , ARG_MAX
AIX 3.x, 4.x, 5.1[aix5] 24576 + ARG_MAX in <sys/limits.h> (NCARGS in <sys/param.h>)
AIX 6.1, 7.2 1048576 + online documentation (FilesReference/HeaderFiles) 6.1, 7.2 (ARG_MAX in <limits.h>)
BSD/OS 4.1,
NetBSD 1.0+x,
OpenBSD x: 262144 + ARG_MAX(/NCARGS) in <sys/syslimits.h>
Cygwin 1.7.7 (win 5.1) [cygwin] 30000 ARG_MAX in <limits.h>
Dynix 3.2 12288 ARG_MAX in <(sys/)limits.h> (NCARGS in <sys/param.h>)
EP/IX 2.2.1AA: 20480 ARG_MAX in <sys/limits.h>
FreeBSD 2.0-5.5 65536 + ARG_MAX(/NCARGS) in <sys/syslimits.h> [freebsd]
FreeBSD 6.0 (PowerPC 6.2, ARM 6.3) 262144 + ARG_MAX(/NCARGS) in <sys/syslimits.h> [freebsd]
GNU Hurd 0.3 Mach 1.3.99 unlimited (stack size?) [hurd] +
Haiku OS (2008-05-14) [haiku] 131072 ? MAX_PROCESS_ARGS_SIZE in <system/user_runtime.h>
HP-UX 8(.07), 9, 10 20478 + ARG_MAX in <limits.h>
HP-UX 11.00 2048000 [hpux] + ARG_MAX in <limits.h>
Interix 3.5 1048576 + -
IRIX 4.0.5 10240 NCARGS in <sys/param.h> (fallback: ARG_MAX in <limits.h>: 5120)
IRIX 5.x, 6.x 20480 [irix] + (fallback: ARG_MAX in <limits.h>: 5120)
Linux -2.6.22 131072 + ARG_MAX in <linux/limits.h> [linux-pre-2.6.23]
Linux 2.6.23 (1/4th of stack size) + kernel code [linux-2.6.23]
MacOS X 10.6.2 (xnu 1486.2.11) 262144 + ARG_MAX(/NCARGS) in <sys/syslimits.h>
MUNIX 3.2 10240 ? ARG_MAX in <sys/syslimits.h>
Minix 3.1.1 16384 ARG_MAX in <limits.h>
OSF1/V4, V5 38912 + ARG_MAX in <sys/syslimits.h>
SCO UNIX SysV R3.2 V4.0/4.2
SCO Open Desktop R2.0/3.0 5120 ? online documentation
SCO OpenServer 5.0.x [osr5] 1048576 + (fallback: ARG_MAX in <limits.h>: 5120)
UnixWare 7.1.4,
OpenUnix 8 32768 [uw/osr6] + (fallback ARG_MAX in <limits.h>: 10240)
SCO OpenServer 6.0.0 32768 [uw/osr6] + (fallback: ARG_MAX in <limits.h>: 10240)
SINIX V5.2 10240 ? ARG_MAX in <limits.h>
SunOS 3.x 10240 ? ARG_MAX in <sys/param.h>
SunOS 4.1.4 1048576 NCARGS in <sys/param.h> , sysconf(_SC_ARG_MAX)
SunOS 5.x (32bit process) 1048320 [sunos5] + ARG_MAX in <limits.h> (NCARGS in <sys/param.h>)
SunOS 5.7+ (64bit process) 2096640 [sunos5] + ARG_MAX in <limits.h> (NCARGS in <sys/param.h>)
SVR4.0 v2.1 (386) 5120 ? (no ARG_MAX/NCARGS in in <limits.h>/<sys/param.h>)
Ultrix 4.3 (vax / mips) 10240 / 20480 NCARGS in <sys/param.h>
Unicos 9,
Unicos/mk 2 49999 + ARG_MAX in <sys/param.h>
UnixWare 7: see OpenServer 6
UWIN 4.3 AT&T Unix Services for Windows 32768 + ARG_MAX in <limits.h>
[posix] See the online documentation (please register for access) for getconf and <limits.h>.
[osr5] Bela Lubkin points out:
The limit on SCO OpenServer 5.0.x is set by 'unsigned int maxexecargs = 1024*1024;'
in /etc/conf/pack.d/kernel/space.c. It can also be changed on a live system with the scodb
kernel debugger:
scodb -w
scodb> maxexecargs=1000000
scodb> q
(0x1000000 = 16MiB.) This is the max size of a new temporary allocation during each exec(), so it’s safe to change on the fly.
Exceeding the limit generates a kernel warning:
WARNING: table_grow - exec data table page limit of 256 pages (MAXEXECARGS) exceeded by 1 pages
WARNING: Reached MAXEXECARGS limit while adding arguments for executable “ls”
Some configure
scripts trigger this message as they deliberately probe the limit.
Raising `maxexecargs’ will not fix this as the probe will simply try harder.
[uw/osr6] The limit on UnixWare can be increased by changing the kernel parameter ARG_MAX with /etc/conf/bin/idtune
(probably in the range up to 1MB), regenerating the kernel with "/etc/conf/bin/idbuild -B" and rebooting.
See also the online documentation.
On UnixWare 7.1.4, the run time limit for a default install of “Business Edition” is 32768.
Bela Lubkin points out that, very basically, OpenServer 6 can be described as a UnixWare 7.1.4 kernel with the OpenServer 5.0.7 userland running on top of it.
[irix] The limit on IRIX can be changed by changing the kernel parameter ncargs with systune
(in the range defined in /var/sysgen/mtune/kernel, probably varying from 64KB to 256KB),
regenerating the kernel with “autoconfig” and rebooting. See also the online documentation of systune(1M) and intro(2).
[aix5] The limit on AIX 5.1 can be changed at run time with "chdev -l sys0 -a ncargs=value", in the range from 64KB to 10244KB.
See also the online documentation for chdev (AIX documentation, Commands reference).
[freebsd] The reason for the first of two increases (40960, 65536) on FreeBSD was interesting and anything but academic:
“Increase ARG_MAX so that `make clean’ in src/lib/libc works again.
(Adding YP pushed it over the limit.)”
quoted from http://www.FreeBSD.org/cgi/cvsweb.cgi/src/sys/sys/syslimits.h
[linux-pre-2.6.23] On Linux, the maximum almost always has been PAGE_SIZE*MAX_ARG_PAGES (4096*32) minus 4.
However, in Linux 0.0.1, ARG_MAX was not known yet, E2BIG was not used yet, and exec() simply returned -1.
With Linux 0.10 it returned ENOMEM, and with Linux 0.99.8 it returned E2BIG.
ARG_MAX was introduced with Linux 0.96, but it is not used in the kernel code itself.
See do_execve() in fs/exec.c on http://www.oldlinux.org/Linux.old/.
If you want to increase the limit, you might succeed by carefully increasing MAX_ARG_PAGES (link to a discussion on the linux kernel mailing list, 03/'00)
[linux-2.6.23] With Linux 2.6.23, ARG_MAX is not hardcoded anymore. See the git entry.
It is limited to a quarter of the stack size (ulimit -s), which ensures that the program can still run at all.
See also the git diff of fs/exec.c
getconf ARG_MAX might still report the former limit, both to be careful about applications or glibc not having caught up, and especially because the kernel's <limits.h> still defines it.
[sunos5] On SunOS 5.5, according to <limits.h>, ARG_MAX is 1M, decreased by the following amount:
"((sizeof(struct arg_hunk)) * (0x10000/(sizeof(struct arg_hunk))))
space for other stuff on initial stack like aux vectors, saved
registers, etc.."
On SunOS 5.9 this reads
“ARG_MAX is calculated as follows:
NCARGS - space for other stuff on initial stack
like aux vectors, saved registers, etc..”
and <sys/param.h> defines NCARGS32/64 as 0x100000/0x200000, with NCARGS being substituted at compile time.
ARG_MAX is not calculated in the header files but set directly in <limits.h>, also substituted at compile time from _ARG_MAX32/64.
SunOS 5.7 is the first release to support 64bit processes.
[hpux] HP-UX 11 can also run programs compiled on HP-UX 10. Programs which have ARG_MAX compiled in as buffer length
and copy from argv[]/envp[] without boundary checking might crash due to the increased ARG_MAX.
See devresource.hp.com
[hurd] NCARGS in contrast is arbitrarily set to INT_MAX (2147483647) in <i386-gnu/sys/param.h>
The reason: “ARG_MAX is unlimited, but we define NCARGS for BSD programs that want to compare against some fixed limit.”
I don’t know yet, if there are other limits like the stack.
[cygwin] ARG_MAX 32000 was added to <limits.h> on 2006-11-07. It's a conservative value, chosen with the Windows limit of 32k in mind.
However, the cygwin internal limit, that is, if you don’t call non-cygwin binaries, is much higher.
[haiku] “Haiku is an open-source operating system […] inspired by the BeOS” (www.haiku-os.org). Thanks to Sylvain Kerjean for this pointer!
Note that there is also <posix/limits.h> with ARG_MAX / _POSIX_ARG_MAX for sysconf(), with a more conservative value of 32768.
[1stEd] By judging from experiments in the simh emulator with 1st edition kernel and 2nd edition shell, the results are somewhat undefined.
If the length or number of arguments (there is no environment yet) is too high, data corruption may occur, including a kernel crash.
The following may or may not indicate the nature of limits:
From the BUGS section in the 3rd edition exec(2) manual:
Very high core and very low core are used by exec to construct the argument list for the new core image.
If the original copies of the arguments reside in these places, problems can result.
and a related information about the placement of the arguments
(which is also available in 1st ed manual) reads equivalent:
1st edition: The arguments are placed as high as possible incore: just below 60000(8).
3rd edition: The arguments are placed as high as possible in core: just below 57000(8).
By calling a script which just echoes its arguments ("sh s arguments"), I found:
- command line (script or interactive) not longer than 255 characters
- single argument not longer than 82 characters
As there is no working compiler (B) on that system, I haven't dug further.