Red Hat Bugzilla – Bug 1160377
sftp fails when using wildcards in directories with many files
Last modified: 2016-02-20 02:38:28 EST
Description of problem:
When connecting via sftp to a host and running ls with a wildcard
(ls /some/dir/*), and the directory has more than 128 files in it, one gets
an error.

Version-Release number of selected component (if applicable):
openssh-6.4p1-8.el7.x86_64
openssh-server-6.4p1-8.el7.x86_64
openssh-clients-6.4p1-8.el7.x86_64
libssh2-1.4.3-8.el7.x86_64

How reproducible:

Steps to Reproduce:
# x="98765432109876543210987654321098765432109876543210"
# dir=/var/tmp/$x
# mkdir -p $dir
# for n in `seq 129`
> do
>   touch $dir/$x-$n.txt
> done
#
# sftp localhost
Connected to localhost.
sftp>
sftp> ls /var/tmp/98765432109876543210987654321098765432109876543210/*
Can't ls: "/var/tmp/98765432109876543210987654321098765432109876543210/*" not found
sftp> ls /var/tmp/98765432109876543210987654321098765432109876543210/*.txt
Can't ls: "/var/tmp/98765432109876543210987654321098765432109876543210/*.txt" not found
sftp>
sftp> ls /var/tmp/98765432109876543210987654321098765432109876543210/
/var/tmp/98765432109876543210987654321098765432109876543210/98765432109876543210987654321098765432109876543210-1.txt
/var/tmp/98765432109876543210987654321098765432109876543210/98765432109876543210987654321098765432109876543210-10.txt
/var/tmp/98765432109876543210987654321098765432109876543210/98765432109876543210987654321098765432109876543210-100.txt
/var/tmp/98765432109876543210987654321098765432109876543210/98765432109876543210987654321098765432109876543210-1000.txt
[...]
sftp>

Actual results:
sftp> ls /var/tmp/98765432109876543210987654321098765432109876543210/*
Can't ls: "/var/tmp/98765432109876543210987654321098765432109876543210/*" not found
sftp> ls /var/tmp/98765432109876543210987654321098765432109876543210/*.txt
Can't ls: "/var/tmp/98765432109876543210987654321098765432109876543210/*.txt" not found
sftp>

Expected results:
A list of the matching files.

Additional info:
In this use case we are hitting the hard limit GLOB_LIMIT_STAT, which is set
to 128 by default. When a wildcard is used, the sftp client has to stat()
every file the pattern expands to, because the star expansion is done on the
client before the listing request is issued; this is less efficient than the
other method shown above (ls /var/tmp/987.../), which needs only a single
directory read. The limit is an upstream feature.

The other problem is that the result of this call is "not found", which is
not self-explanatory. Unfortunately the sftp protocol has no better error
code for exceeding this limit, so it ends up reported like this.

Possible solutions:
* Extend the limit (works, but only until someone else tries to ls an even
  larger directory; the constant involved is shown in the sketch after this
  comment)
* Generate a better error message/code (an sftp protocol problem) [1]

This also applies to current Fedora and upstream releases of OpenSSH.

[1] http://winscp.net/eng/docs/sftp_codes
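For reference, here is a minimal standalone sketch of the stat budget
involved. It is loosely modeled on the bundled glob in OpenSSH's
openbsd-compat/glob.c; the struct and function names below are hypothetical
illustrations, not the real implementation:

#include <glob.h>   /* for GLOB_NOSPACE */
#include <stdio.h>

#define GLOB_LIMIT_STAT 128  /* hard limit as shipped in openssh-6.4p1 */

/* hypothetical per-expansion counter; the real glob keeps similar state */
struct glob_limits {
        size_t stats;        /* stat() calls made so far */
};

/* charge one stat() call against the budget */
static int limited_stat_check(struct glob_limits *lim)
{
        if (lim->stats++ >= GLOB_LIMIT_STAT)
                return GLOB_NOSPACE;  /* glob() has no more specific code */
        return 0;
}

int main(void)
{
        struct glob_limits lim = { 0 };
        int i, rc = 0;

        /* expanding "dir/*" over 129 matches: the 129th stat trips the cap */
        for (i = 0; i < 129; i++)
                if ((rc = limited_stat_check(&lim)) != 0)
                        break;

        printf("gave up after %zu stat() calls, rc=%d (GLOB_NOSPACE=%d)\n",
            lim.stats, rc, GLOB_NOSPACE);
        return 0;
}

Since glob() can only report the generic GLOB_NOSPACE here, the sftp client
cannot distinguish "limit exceeded" from "no match", which is why it prints
the misleading "not found".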
This bug not only affects "ls *", it also affects "get *". In the case of ls, there is a workaround noted in the previous comment. In the case of get, there is no workaround that I know of. Would it be possible to elevate the severity and priority? Also, is it necessary to have a glob limit? Please take a look at https://bugzilla.mindrot.org/show_bug.cgi?id=2395.
This bugzilla is in VERIFIED state, which means that it was successfully
fixed and tested. Unfortunately that is not obvious from the public comments,
since most of the important ones are private. I have just verified that the
fix also covers your use case, so you can expect it to be fixed in the next
update.
*** Bug 1264174 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-2088.html