Bug 1160377 - sftp is failing using wildcards and many files
Summary: sftp is failing using wildcards and many files
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: openssh
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: rc
Target Release: 7.1
Assignee: Jakub Jelen
QA Contact: Stanislav Zidek
URL:
Whiteboard:
Duplicates: 1264174 (view as bug list)
Depends On:
Blocks: 1133060 1205796
 
Reported: 2014-11-04 16:23 UTC by daniel
Modified: 2019-12-16 04:34 UTC
CC: 7 users

Fixed In Version: openssh-6.6.1p1-22.el7
Doc Type: Bug Fix
Doc Text:
Cause: The limit on the number of files that can be listed using a wildcard character (*), a safeguard preventing denial of service (DoS) for both server and client, was too low. Consequence: Users frequently hit this limit, which prevented them from listing directories with many files over SFTP. Fix: The limit was increased 64-fold, to 8192 files. Result: Users will not hit this limit so easily.
Clone Of:
Environment:
Last Closed: 2015-11-19 08:02:17 UTC
Target Upstream Version:
Embargoed:
dmoessne: needinfo+




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:2088 0 normal SHIPPED_LIVE Moderate: openssh security, bug fix, and enhancement update 2015-11-19 08:38:51 UTC

Description daniel 2014-11-04 16:23:39 UTC
Description of problem:
When connecting via sftp to a host and running ls with a wildcard (ls /some/dir/*), if the directory has more than 128 files in it, an error is returned.


Version-Release number of selected component (if applicable):

openssh-6.4p1-8.el7.x86_64
openssh-server-6.4p1-8.el7.x86_64
openssh-clients-6.4p1-8.el7.x86_64
libssh2-1.4.3-8.el7.x86_64

How reproducible:


Steps to Reproduce:
# x="98765432109876543210987654321098765432109876543210"
# dir=/var/tmp/$x
# mkdir -p $dir
# for n in `seq 129`
> do
> touch $dir/$x-$n.txt
> done
#
# sftp localhost
Connected to localhost.
sftp>
sftp> ls /var/tmp/98765432109876543210987654321098765432109876543210/*
Can't ls: "/var/tmp/98765432109876543210987654321098765432109876543210/*" not found
sftp> ls /var/tmp/98765432109876543210987654321098765432109876543210/*.txt
Can't ls: "/var/tmp/98765432109876543210987654321098765432109876543210/*.txt" not found
sftp> 
sftp> ls /var/tmp/98765432109876543210987654321098765432109876543210/
/var/tmp/98765432109876543210987654321098765432109876543210/98765432109876543210987654321098765432109876543210-1.txt                                                                                             
/var/tmp/98765432109876543210987654321098765432109876543210/98765432109876543210987654321098765432109876543210-10.txt                                                                                                      
/var/tmp/98765432109876543210987654321098765432109876543210/98765432109876543210987654321098765432109876543210-100.txt                                                                                   
/var/tmp/98765432109876543210987654321098765432109876543210/98765432109876543210987654321098765432109876543210-1000.txt                                   
[...]
sftp>


Actual results:
sftp> ls /var/tmp/98765432109876543210987654321098765432109876543210/*
Can't ls: "/var/tmp/98765432109876543210987654321098765432109876543210/*" not found
sftp> ls /var/tmp/98765432109876543210987654321098765432109876543210/*.txt
Can't ls: "/var/tmp/98765432109876543210987654321098765432109876543210/*.txt" not found
sftp> 


Expected results:

list of files. 

Additional info:

Comment 2 Jakub Jelen 2015-01-13 14:33:59 UTC
In this use case we are hitting the hard limit GLOB_LIMIT_STAT which is set to 128 by default.
When a wildcard is used, ls needs to stat all matching files, because the star expansion is done on the client before the listing request is sent to the server. This is less efficient than the other method mentioned (ls /var/tmp/987.../), which lets the server enumerate the directory.

This limit comes from upstream. The other problem is that the result of this call is "not found", which is not self-explanatory; unfortunately the SFTP protocol has no better error code for exceeding this limit, so that is how it ends up being reported.
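The client-side guard described above can be sketched as a small shell function. This is a simplified analogue of the stat limit in sftp's glob expansion, not the actual implementation; the function name and structure are illustrative.

```shell
# Simplified analogue of the client-side glob guard described above:
# expand the pattern, count each existing match, and give up once more
# than GLOB_LIMIT_STAT (128) matches are seen -- the point at which
# sftp reports the unhelpful "not found".
GLOB_LIMIT_STAT=128

count_matches() {
  # $1 is an unexpanded glob pattern, e.g. "/var/tmp/dir/*"
  pattern=$1
  n=0
  for f in $pattern; do        # unquoted on purpose: the shell expands the glob here
    [ -e "$f" ] || continue    # the literal pattern is returned when nothing matches
    n=$((n + 1))
    if [ "$n" -gt "$GLOB_LIMIT_STAT" ]; then
      echo "not found" >&2     # the only error the protocol lets sftp report
      return 1
    fi
  done
  echo "$n"
}
```

With 129 files in a directory the function fails just as the reproducer above does; with 128 or fewer it prints the match count.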

Possible solutions:
 * Extend the limit (works, but only until someone tries to list an even larger directory)
 * Try to generate better error message/code (sftp protocol problem) [1]

This also affects current Fedora and upstream releases of OpenSSH.

[1] http://winscp.net/eng/docs/sftp_codes

Comment 9 Matt Olsen 2015-06-22 16:17:30 UTC
This bug affects not only "ls *" but also "get *". For ls there is a workaround noted in the previous comment; for get, there is no workaround that I know of. Would it be possible to raise the severity and priority?

Also, is it necessary to have a glob limit? Please take a look at https://bugzilla.mindrot.org/show_bug.cgi?id=2395.
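As a hypothetical sketch of a workaround for the get case (not something suggested in this report): enumerate the files server-side with a plain ls over ssh, so no client-side glob expansion is involved, and turn the list into an sftp batch of individual get commands. The host name, the directory path, and the make_batch helper are all illustrative.

```shell
# Hypothetical workaround sketch for "get *" hitting the glob limit:
# list the directory server-side (no client-side expansion), then
# issue one "get" per file as an sftp batch. "host" is a placeholder.
dir=/var/tmp/bigdir

make_batch() {
  # reads one file name per line on stdin, emits sftp "get" commands
  sed "s|^|get $dir/|"
}

# Usage (requires key-based auth; not run here):
#   ssh host "ls -1 $dir" | make_batch | sftp -b - host
```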

Comment 10 Jakub Jelen 2015-06-23 06:23:26 UTC
This bugzilla is in VERIFIED state, which means the fix was successfully implemented and tested. Unfortunately this is not obvious from the public comments, since most of the important ones are private.
I have just verified that the fix also covers your use case, so you can expect it to be fixed in the next update.

Comment 11 Matt Goldman 2015-09-18 15:14:29 UTC
*** Bug 1264174 has been marked as a duplicate of this bug. ***

Comment 30 errata-xmlrpc 2015-11-19 08:02:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-2088.html

