Bug 1506395 - curl: (35) SSL received a record that exceeded the maximum permissible length.
Summary: curl: (35) SSL received a record that exceeded the maximum permissible length.
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Fedora
Classification: Fedora
Component: curl
Version: 26
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Kamil Dudka
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-10-25 21:29 UTC by Rares Vernica
Modified: 2018-02-22 20:40 UTC
CC List: 5 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2017-10-27 10:51:43 UTC
Type: Bug
Embargoed:



Description Rares Vernica 2017-10-25 21:29:30 UTC
Description of problem:

The SSL layer in curl or one of its dependencies seems broken; wget works fine.


Version-Release number of selected component (if applicable):

curl 7.53.1 (x86_64-redhat-linux-gnu) libcurl/7.53.1 NSS/3.32.1 zlib/1.2.11 libidn2/2.0.4 libpsl/0.18.0 (+libidn2/2.0.3) libssh2/1.8.0 nghttp2/1.21.1

Linux 4.13.5-200.fc26.x86_64 #1 SMP Thu Oct 5 16:53:13 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux


How reproducible:

Always


Steps to Reproduce:

curl https://google.com


Actual results:

> curl -v https://google.com
* Rebuilt URL to: https://google.com/
*   Trying 15.78.57.10...
* TCP_NODELAY set
* Connected to ...proxy...
* Initializing NSS with certpath: sql:/etc/pki/nssdb
*   CAfile: none
  CApath: none
* loaded libnssckbi.so
* NSS error -12263 (SSL_ERROR_RX_RECORD_TOO_LONG)
* SSL received a record that exceeded the maximum permissible length.
* Closing connection 0
curl: (35) SSL received a record that exceeded the maximum permissible length.


Expected results:

wget works fine:

> wget https://google.com
--2017-10-25 14:22:50--  https://google.com/
Resolving ...proxy...
Connecting to ...proxy... connected.
Proxy request sent, awaiting response... 301 Moved Permanently
Location: https://www.google.com/ [following]
--2017-10-25 14:22:50--  https://www.google.com/
Connecting to ...proxy... connected.
Proxy request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html’

index.html                             [ <=>    ]  11.35K  --.-KB/s    in 0s      

2017-10-25 14:22:51 (48.3 MB/s) - ‘index.html’ saved [11619]


HTTP works fine:

> curl http://google.com
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>


Additional info:

I am using a web proxy.

Comment 1 Kamil Dudka 2017-10-26 07:00:42 UTC
(In reply to Rares Vernica from comment #0)
> SSL layer in curl or one of its dependencies seems broken.

Unlikely.  If curl were not able to initiate TLS connections to google.com, people would notice it very quickly ;-)

> I am using a web proxy.

That is apparently the cause of your troubles.  What kind of web proxy are you using?

How did you configure your environment for curl/wget to use that proxy?

You either need to give us access to that proxy, or provide steps on how to configure such a proxy locally.  Otherwise it is nearly impossible for us to help you.
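
As a starting point, it usually helps to paste the proxy-related environment variables together with a verbose curl trace. The commands below are only a generic sketch and assume nothing about your particular setup (feel free to mask the proxy hostname if it is confidential):

% env | grep -i proxy
% curl -v https://google.com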

Comment 2 Rares Vernica 2017-10-26 22:06:05 UTC
I agree with you that it is unlikely. Maybe it is a combination of TLS being used with a web proxy.

The proxy is a corporate proxy and I don't know how it is configured. I would say it is unlikely that the proxy causes this, as it works fine with wget and various web browsers for the entire site.

The proxy is set in the environment variable https_proxy. http_proxy is set as well. Both variables use the same server and the same port. The only difference between them is https://... vs http://...

There are no other local configurations for wget or curl that I know of. In /etc/wgetrc, all the lines are commented out. I could not find any such file for curl. Let me know what other environment settings I can provide.
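
For reference, the setup described above presumably boils down to something like the following; proxy.example.com:8080 is only a placeholder, not the real corporate proxy address:

http_proxy=http://proxy.example.com:8080
https_proxy=https://proxy.example.com:8080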

Comment 3 Kamil Dudka 2017-10-27 10:51:43 UTC
(In reply to Rares Vernica from comment #2)
> The proxy is set in the environment variable https_proxy. http_proxy is set
> as well. Both variables use the same server and the same port. The only
> difference between then is https://... vs http://...

While you did not provide the exact values of those environment variables, I think I have locally reproduced exactly the behavior you described in comment #0.

% export https_proxy=https://localhost:3128

% curl -svo/dev/null http://google.com
* Rebuilt URL to: http://google.com/
*   Trying 172.217.21.238...
* TCP_NODELAY set
* Connected to google.com (172.217.21.238) port 80 (#0)
> GET / HTTP/1.1
> Host: google.com
> User-Agent: curl/7.53.1
> Accept: */*
> 
< HTTP/1.1 302 Found
< Cache-Control: private
< Content-Type: text/html; charset=UTF-8
< Referrer-Policy: no-referrer
< Location: http://www.google.cz/?gfe_rd=cr&dcr=0&ei=-g3zWa-FHpPN8geh85TgCQ
< Content-Length: 268
< Date: Fri, 27 Oct 2017 10:44:10 GMT
< 
{ [268 bytes data]
* Connection #0 to host google.com left intact

% curl -svo/dev/null https://google.com
* Rebuilt URL to: https://google.com/
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 3128 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
*   CAfile: none
  CApath: none
* loaded libnssckbi.so
* NSS error -12263 (SSL_ERROR_RX_RECORD_TOO_LONG)
* SSL received a record that exceeded the maximum permissible length.
* Closing connection 0

% wget https://google.com
--2017-10-27 12:45:05--  https://google.com/
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:3128... connected.
Proxy request sent, awaiting response... 302 Found
Location: https://www.google.cz/?gfe_rd=cr&dcr=0&ei=MQ7zWcX0J4XN8gep04-IAw [following]
--2017-10-27 12:45:05--  https://www.google.cz/?gfe_rd=cr&dcr=0&ei=MQ7zWcX0J4XN8gep04-IAw
Connecting to localhost (localhost)|::1|:3128... connected.
Proxy request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html’

index.html              [ <=>              ]  11.01K  --.-KB/s    in 0s      

2017-10-27 12:45:05 (128 MB/s) - ‘index.html’ saved [11277]


This can be easily fixed by removing the https:// prefix from $https_proxy:

% export https_proxy=localhost:3128
      
% curl -svo/dev/null https://google.com
* Rebuilt URL to: https://google.com/
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 3128 (#0)
* Establish HTTP proxy tunnel to google.com:443
> CONNECT google.com:443 HTTP/1.1
> Host: google.com:443
> User-Agent: curl/7.53.1
> Proxy-Connection: Keep-Alive
> 
< HTTP/1.1 200 Connection established
< 
* Proxy replied OK to CONNECT request
* Initializing NSS with certpath: sql:/etc/pki/nssdb
*   CAfile: none
  CApath: none
* loaded libnssckbi.so
* ALPN, server accepted to use h2
* SSL connection using TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
* Server certificate:
*       subject: CN=*.google.com,O=Google Inc,L=Mountain View,ST=California,C=US
*       start date: Oct 17 10:30:35 2017 GMT
*       expire date: Dec 29 00:00:00 2017 GMT
*       common name: *.google.com
*       issuer: CN=Google Internet Authority G2,O=Google Inc,C=US
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55d96564d010)
> GET / HTTP/2
> Host: google.com
> User-Agent: curl/7.53.1
> Accept: */*
> 
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 302 
< cache-control: private
< content-type: text/html; charset=UTF-8
< referrer-policy: no-referrer
< location: https://www.google.cz/?gfe_rd=cr&dcr=0&ei=iQ7zWbz2F4zN8geYuoP4Cg
< content-length: 269
< date: Fri, 27 Oct 2017 10:46:33 GMT
< alt-svc: quic=":443"; ma=2592000; v="41,39,38,37,35"
< 
{ [269 bytes data]
* Connection #0 to host localhost left intact


The only reason it worked with wget is that wget does not support TLS connections to web proxies (unlike curl), so it talks plain HTTP to the proxy regardless of the scheme given in $https_proxy.
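
As a side note, not part of the fix above: if you prefer to keep the https:// prefix in $https_proxy for other tools, curl also lets you override the proxy for a single invocation with the -x/--proxy option; localhost:3128 below is just the proxy from this local reproducer:

% curl -x http://localhost:3128 -svo/dev/null https://google.com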

Comment 4 Rares Vernica 2017-10-27 23:48:58 UTC
I am glad you were able to reproduce it. Thanks for the workaround. My setup had worked fine for the last few years, so I guess something changed recently.

Comment 5 Kamil Dudka 2017-10-28 06:42:44 UTC
Yes.  Support for HTTPS connections _to_ proxies was introduced in curl 7.52.0:

https://github.com/curl/curl/pull/1127
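
A quick way to check whether a given curl build includes this feature is to look for HTTPS-proxy in the feature list printed by curl -V, for example:

% curl -V | grep -o HTTPS-proxy

The exact feature list depends on how the curl package was built.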

