Bug 1160377

Summary: sftp is failing using wildcards and many files
Product: Red Hat Enterprise Linux 7 Reporter: daniel <dmoessne>
Component: openssh    Assignee: Jakub Jelen <jjelen>
Status: CLOSED ERRATA QA Contact: Stanislav Zidek <szidek>
Severity: low Docs Contact:
Priority: low    
Version: 7.0    CC: jjelen, lmiksik, magoldma, matt.olsen, plautrba, szidek, tmraz
Target Milestone: rc    Flags: dmoessne: needinfo+
Target Release: 7.1   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: openssh-6.6.1p1-22.el7 Doc Type: Bug Fix
Doc Text:
Cause: The limit on the number of files that can be listed using the wildcard character (*), which protects both server and client from denial of service (DoS), was too low. Consequence: Users frequently hit this limit, which prevented them from listing directories with many files over sftp. Fix: The limit was increased 64-fold, to 8192 files. Result: Users will no longer hit this limit so easily.
Story Points: ---
Clone Of: Environment:
Last Closed: 2015-11-19 08:02:17 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1133060, 1205796    

Description daniel 2014-11-04 16:23:39 UTC
Description of problem:
When connecting to a host via sftp and running ls with a wildcard (ls /some/dir/*), an error occurs if the directory contains more than 128 files.


Version-Release number of selected component (if applicable):

openssh-6.4p1-8.el7.x86_64
openssh-server-6.4p1-8.el7.x86_64
openssh-clients-6.4p1-8.el7.x86_64
libssh2-1.4.3-8.el7.x86_64

How reproducible:


Steps to Reproduce:
# x="98765432109876543210987654321098765432109876543210"
# dir=/var/tmp/$x
# mkdir -p $dir
# for n in `seq 129`
> do
> touch $dir/$x-$n.txt
> done
#
# sftp localhost
Connected to localhost.
sftp>
sftp> ls /var/tmp/98765432109876543210987654321098765432109876543210/*
Can't ls: "/var/tmp/98765432109876543210987654321098765432109876543210/*" not found
sftp> ls /var/tmp/98765432109876543210987654321098765432109876543210/*.txt
Can't ls: "/var/tmp/98765432109876543210987654321098765432109876543210/*.txt" not found
sftp> 
sftp> ls /var/tmp/98765432109876543210987654321098765432109876543210/
/var/tmp/98765432109876543210987654321098765432109876543210/98765432109876543210987654321098765432109876543210-1.txt                                                                                             
/var/tmp/98765432109876543210987654321098765432109876543210/98765432109876543210987654321098765432109876543210-10.txt                                                                                                      
/var/tmp/98765432109876543210987654321098765432109876543210/98765432109876543210987654321098765432109876543210-100.txt                                                                                   
/var/tmp/98765432109876543210987654321098765432109876543210/98765432109876543210987654321098765432109876543210-1000.txt                                   
[...]
sftp>


Actual results:
sftp> ls /var/tmp/98765432109876543210987654321098765432109876543210/*
Can't ls: "/var/tmp/98765432109876543210987654321098765432109876543210/*" not found
sftp> ls /var/tmp/98765432109876543210987654321098765432109876543210/*.txt
Can't ls: "/var/tmp/98765432109876543210987654321098765432109876543210/*.txt" not found
sftp> 


Expected results:

A list of the matching files.

Additional info:

Comment 2 Jakub Jelen 2015-01-13 14:33:59 UTC
In this use case we are hitting the hard limit GLOB_LIMIT_STAT, which is set to 128 by default.
When a wildcard is used, ls has to stat every matching file, because the star expansion is performed before ls is invoked on the remote side; this is less efficient than the other method mentioned above (ls /var/tmp/987../).

This behavior comes from upstream. A further problem is that the call fails with "not found", which is not self-explanatory; unfortunately, the sftp protocol has no better error code for exceeding this limit, so that is how the failure ends up being reported.

Possible solutions:
 * Extend the limit (works, but only until someone tries to ls an even larger directory)
 * Generate a better error message/code (an sftp protocol limitation) [1]

This also applies to current Fedora and upstream releases of OpenSSH.

[1] http://winscp.net/eng/docs/sftp_codes
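The reproduction above can be condensed into a self-contained sketch. A local shell glob succeeds where sftp's server-side glob gives up, because only the sftp server applies GLOB_LIMIT_STAT; the paths below are illustrative:

```shell
#!/bin/sh
# Create a directory with more entries than OpenSSH's default
# GLOB_LIMIT_STAT (128); any such directory trips the limit.
dir=$(mktemp -d)
for n in $(seq 129); do
    touch "$dir/file-$n.txt"
done

# A local shell glob has no such cap and lists all 129 files; the same
# pattern inside sftp fails with "not found", because the server stats
# each match and aborts once the limit is exceeded.
ls "$dir"/*.txt | wc -l    # prints 129

rm -rf "$dir"
```

Listing the directory itself (ls /some/dir/ without the wildcard) avoids the per-match stat path entirely, which is why the non-wildcard listing in the description succeeds.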

Comment 9 Matt Olsen 2015-06-22 16:17:30 UTC
This bug not only affects "ls *", it also affects "get *". In the case of ls, there is a workaround noted in the previous comment. In the case of get, there is no workaround that I know of. Would it be possible to elevate the severity and priority?

Also, is it necessary to have a glob limit? Please take a look at https://bugzilla.mindrot.org/show_bug.cgi?id=2395.

Comment 10 Jakub Jelen 2015-06-23 06:23:26 UTC
This bugzilla is in VERIFIED state, which means the fix has been successfully implemented and tested. Unfortunately that is not obvious from the public comments, since most of the important ones are private.
I have just verified that the fix also covers your use case, so you can expect it to be resolved in the next update.
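For reference, the change shipped in openssh-6.6.1p1-22.el7 amounts to raising the compiled-in glob limit. The constant name and both values come from the comments and Doc Text above; the file location in the OpenSSH tree is an assumption:

```c
/* openbsd-compat/glob.c (location assumed): cap on the number of
 * stat() calls a single glob expansion may perform before aborting. */
#define GLOB_LIMIT_STAT 8192   /* previously 128 */
```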

Comment 11 Matt Goldman 2015-09-18 15:14:29 UTC
*** Bug 1264174 has been marked as a duplicate of this bug. ***

Comment 30 errata-xmlrpc 2015-11-19 08:02:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-2088.html