Bug 678580 - suboptimal handling of CURLOPT_SSL_VERIFYPEER
Summary: suboptimal handling of CURLOPT_SSL_VERIFYPEER
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: curl
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Kamil Dudka
QA Contact: BaseOS QE Security Team
URL:
Whiteboard:
Depends On:
Blocks: 670159 692118
 
Reported: 2011-02-18 14:11 UTC by Alexander Todorov
Modified: 2020-03-09 15:24 UTC (History)
CC List: 10 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
When libcurl connected a second time to an SSL server presenting the same server certificate, the certificate was not verified again, because libcurl had confirmed its authenticity before the first connection and the result was cached. This is fixed by disabling the SSL session cache for connections that do not verify the peer certificate, which forces verification of the certificate on subsequent connections.
Clone Of:
Clones: 692118 (view as bug list)
Environment:
Last Closed: 2011-05-19 13:12:37 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
anaconda.log (33.88 KB, text/plain)
2011-02-18 14:11 UTC, Alexander Todorov
no flags Details
anaconda.program.log (22.01 KB, text/plain)
2011-02-18 14:12 UTC, Alexander Todorov
no flags Details
anaconda.syslog (65.50 KB, text/plain)
2011-02-18 14:12 UTC, Alexander Todorov
no flags Details
install.log (16.56 KB, text/plain)
2011-02-18 14:12 UTC, Alexander Todorov
no flags Details
bug isolated (1.31 KB, text/plain)
2011-02-22 07:44 UTC, Ales Kozumplik
no flags Details
proposed fix (4.71 KB, patch)
2011-03-07 21:44 UTC, Kamil Dudka
no flags Details | Diff
proposed fix V2 (5.69 KB, patch)
2011-03-08 11:48 UTC, Kamil Dudka
rcritten: review+
Details | Diff
a test-case (1.25 KB, text/plain)
2011-04-11 13:15 UTC, Kamil Dudka
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2011:0573 0 normal SHIPPED_LIVE curl bug fix update 2011-05-18 17:57:02 UTC

Description Alexander Todorov 2011-02-18 14:11:03 UTC
Created attachment 479513 [details]
anaconda.log

Description of problem:

When using https with a self-signed certificate, one needs to use the --noverifyssl option with the url and repo commands in kickstart. If the user specifies --noverifyssl only for the url command, the listed repository is silently accepted although its certificate can't be verified.

Version-Release number of selected component (if applicable):
anaconda-13.21.97-1.el6.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Configure HTTPS with self-signed certificate and put into your ks.cfg:

url --url https://example.com/rhel --noverifyssl
repo --name=LoadBalancer --baseurl=https://example.com/rhel/LoadBalancer/

%packages --ignoremissing
@Core
piranha
%end

Here the repository is a loopback-mounted DVD under /var/www/html/rhel.

2. Complete the install and reboot.
3.
  
Actual results:
The repository is silently accepted and the piranha package is installed. 

Expected results:
The repository is ignored because its certificate can't be verified (either with an error dialog or silently) and the piranha package is not installed (because of --ignoremissing).

Additional info:
In anaconda.log we find:

14:01:25,571 INFO    : anaconda called with cmdline = ['/usr/bin/anaconda', '--stage2', 'https://dell-per810-01.lab.bos.redhat.com/rhel/images/install.img', '--dlabel', '--kickstart', '/tmp/ks.cfg', '--graphical', '--selinux', '--lang', 'en_US.UTF-8', '--keymap', 'us', '--repo', 'https://dell-per810-01.lab.bos.redhat.com/rhel', '--noverifyssl']

Which sounds like --noverifyssl for url acts like a global parameter.

Comment 1 Alexander Todorov 2011-02-18 14:12:04 UTC
Created attachment 479515 [details]
anaconda.program.log

Comment 2 Alexander Todorov 2011-02-18 14:12:15 UTC
Created attachment 479516 [details]
anaconda.syslog

Comment 3 Alexander Todorov 2011-02-18 14:12:26 UTC
Created attachment 479517 [details]
install.log

Comment 4 Ales Kozumplik 2011-02-22 07:44:19 UTC
Created attachment 480059 [details]
bug isolated

Attaching code that shows that yum (incorrectly) accepts doSackSetup() if only one of the two used https repos has sslverify = False.

Reassigning to yum.

Comment 5 Ales Kozumplik 2011-02-22 07:56:01 UTC
Yum maintainers,

the example in comment 4 shows a call to doSackSetup() after the yumbase object has been populated with two repos. Both of them point to an https://... baseurl, the server has a self-signed certificate and is not trusted. Unless I specify sslverify = False for both repos, doSackSetup() (correctly) tracebacks (can't download metadata). However, if sslverify = False is specified for only the first repo, I would expect the traceback to occur for the second repo, but it does not.

This is why Alex sees --noverifyssl in kickstart working 'globally' (i.e. if we specify it for the base repo, a second repo on the same server is automatically accepted).

Comment 6 James Antill 2011-02-22 17:41:29 UTC
Ok, this looks like it's a problem in curl itself. My guess is that it's due to the fact it's two connections to the same IP address.
As far as I can see we are passing all the data correctly down to the curl layer. And the data we are passing seems to be identical if I disable the id-1 repo (at which point it fails).


However I will note that from #c0:

anaconda called with cmdline = ['/usr/bin/anaconda',
'--stage2',
'https://dell-per810-01.lab.bos.redhat.com/rhel/images/install.img',
'--dlabel', '--kickstart', '/tmp/ks.cfg', '--graphical', '--selinux', '--lang',
'en_US.UTF-8', '--keymap', 'us', '--repo',
'https://dell-per810-01.lab.bos.redhat.com/rhel', '--noverifyssl']

The last argument here tells anaconda to disable SSL checks for all repos. (AFAICS).

Comment 7 James Antill 2011-02-22 17:44:25 UTC
Interesting to note that the problem remains if I create a new pycurl.Curl() for each download.

Comment 8 Kamil Dudka 2011-02-22 18:34:48 UTC
curl works as documented

Comment 9 seth vidal 2011-02-22 19:57:34 UTC
Kamil,
 here's a pycurl only way to replicate the error on my system.

import pycurl

for (s, val) in [('no verify', False), ('verify', True)]:
    print s
    d = pycurl.Curl()
    d.setopt(pycurl.SSL_VERIFYPEER, val)
    d.setopt(pycurl.SSL_VERIFYHOST, val)
    d.setopt(pycurl.URL, 'https://10.34.39.47/myrepo/')
    try:
        d.perform()
    except:
        print 'Failed'
    else:
        if d.getinfo(pycurl.RESPONSE_CODE) == 200:
            print 'worked'
    print '========================================================='



both connections will work, even though the second one should fail.

now reverse the values in the list at the top:

import pycurl

for (s, val) in [('verify', True), ('no verify', False)]:
    print s
    d = pycurl.Curl()
    d.setopt(pycurl.SSL_VERIFYPEER, val)
    d.setopt(pycurl.SSL_VERIFYHOST, val)
    d.setopt(pycurl.URL, 'https://10.34.39.47/myrepo/')
    try:
        d.perform()
    except:
        print 'Failed'
    else:
        if d.getinfo(pycurl.RESPONSE_CODE) == 200:
            print 'worked'
    print '========================================================='



and it fails properly.

reassigning to curl.

Comment 10 Kamil Dudka 2011-02-22 20:17:11 UTC
Thanks, now I see your concerns.  But you are connecting two times to the same server with the _same_ certificate.  Once you say the server's cert is fine for you, it is not checked the second time.  Even libcurl is not asked the second time as long as SSL_ClearSessionCache() is not called.  I don't think we should call SSL_ClearSessionCache() per connection, as the function acts globally and has a significant performance impact.

But the session cache is cleared if you load a CRL, so that the CRL can take effect.  Maybe this is the way to go for Anaconda, together with CURLOPT_FORBID_REUSE.

Is that a purely synthetic use-case?
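[Editorial note: the caching behaviour Kamil describes can be sketched as a toy model. This is plain Python, NOT libcurl/NSS code; the class and its behaviour are an illustrative simulation of "once a session with a given cert is accepted, the cert is not re-checked":]

```python
# Toy model of the NSS session-cache behaviour under discussion.
# Not real libcurl/NSS code -- the names and logic are illustrative only.

class ToySessionCache:
    def __init__(self):
        self._accepted = set()  # certs for which a session is already cached

    def connect(self, cert, verifypeer, cert_is_trusted):
        """Return True if the toy 'handshake' succeeds."""
        if cert in self._accepted:          # cached session: cert check skipped
            return True
        if verifypeer and not cert_is_trusted:
            return False                    # handshake fails, nothing cached
        self._accepted.add(cert)            # session cached as "good"
        return True

cache = ToySessionCache()
# First connection with verification disabled succeeds and is cached...
assert cache.connect("self-signed", verifypeer=False, cert_is_trusted=False)
# ...so a later connection that DOES ask for verification wrongly succeeds.
assert cache.connect("self-signed", verifypeer=True, cert_is_trusted=False)

# Reversed order (fresh cache), as in comment 9: the verifying connection
# fails first, and the non-verifying one still succeeds.
cache = ToySessionCache()
assert not cache.connect("self-signed", verifypeer=True, cert_is_trusted=False)
assert cache.connect("self-signed", verifypeer=False, cert_is_trusted=False)
```

This mirrors why the order of the two loops in comment 9 changes the outcome: only the first handshake for a given cert actually performs the check.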

Comment 11 seth vidal 2011-02-22 20:22:32 UTC
no.

 we will also have the case of yum connecting to the SAME server with multiple different ssl certificates which will result in DIFFERENT outputs.

Just to be clear.

pycurl.Curl() does not actually create FRESH object values?

Comment 12 James Antill 2011-02-22 20:25:51 UTC
From:

http://www.mozilla.org/projects/security/pki/nss/ref/ssl/sslfnc.html#1138601


You must call SSL_ClearSessionCache after you use one of the SSL Export Policy Functions to change cipher suite policy settings or use SSL_CipherPrefSetDefault to enable or disable any cipher suite. Otherwise, the old settings remain in the session cache and will be used instead of the new settings.

...this pretty clearly _has_ to be called by curl if the caller changes any of:

        self.curl_obj.setopt(pycurl.CAPATH, opts.ssl_ca_cert)
        self.curl_obj.setopt(pycurl.CAINFO, opts.ssl_ca_cert)
        self.curl_obj.setopt(pycurl.SSL_VERIFYPEER, opts.ssl_verify_peer)
        self.curl_obj.setopt(pycurl.SSL_VERIFYHOST, opts.ssl_verify_host)

...and very possibly any of:

        self.curl_obj.setopt(pycurl.SSLKEY, opts.ssl_key)
        self.curl_obj.setopt(pycurl.SSLKEYTYPE, opts.ssl_key_type)
        self.curl_obj.setopt(pycurl.SSLCERT, opts.ssl_cert)
        self.curl_obj.setopt(pycurl.SSLCERTTYPE, opts.ssl_cert_type)
        self.curl_obj.setopt(pycurl.SSLKEYPASSWD, opts.ssl_key_pass)

...anaconda can't do that. urlgrabber could _maybe_ work around it, in the same way curl should, by reloading the CRL ... but it'd be pretty icky. (esp. as we use the default CAPATH atm.)

Comment 13 Kamil Dudka 2011-02-22 20:40:17 UTC
Let's distinguish two things.  If you change SSL_VERIFYPEER/SSL_VERIFYHOST, libcurl does respect the change.  You only can't reject an already accepted certificate.  If you have two distinct servers with two distinct certs, the problem does not exist.

As for CAINFO and CAPATH (CAPATH is broken before RHEL-6.1), they have only incremental semantics from what I know.  There is no way to unload already loaded CA certificates.

I am afraid that calling SSL_ClearSessionCache() per connection would cause more harm.  The function takes no arguments.  It is neither a connection- nor a libcurl-specific action -- it acts application-globally.  I have not yet checked whether the function is thread-safe.

Comment 14 Kamil Dudka 2011-02-22 20:43:50 UTC
(In reply to comment #11)
>  we will also have the case of yum connecting to the SAME server with multiple
> different ssl certificates which will result in DIFFERENT outputs.

What you mean by "the SAME server with multiple different ssl certificates"?

Comment 15 seth vidal 2011-02-22 20:47:34 UTC
[repo1]
baseurl=https://myserver.org/path/to/repo1
sslclientkey=repo1_key.pem
sslclientcert=repo1_cert.pem

[repo2]
baseurl=https://myserver.org/path/to/repo2
sslclientkey=repo2_key.pem
sslclientcert=repo2_cert.pem

that's what I mean.

Comment 16 Kamil Dudka 2011-02-22 20:49:41 UTC
These are client certificates, while this bug is about server's cert verification.

Comment 17 seth vidal 2011-02-22 20:58:24 UTC
the above is just an example.

but to be clear - are client certificates stored and not reset when I make a new curl instance? 

Also is there currently a way to call SSL_ClearSessionCache() from the pycurl layer?

Comment 18 Kamil Dudka 2011-02-22 21:11:12 UTC
(In reply to comment #17)
> the above is just an example.
> 
> but to be clear - are client certificates stored and not reset when I make a
> new curl instance?

If you mean the client certificates stored in the NSS database, they are managed independently of curl and we use them in read-only mode.  NSS database is actually the recommended way to use client certificates with NSS powered libcurl.

If you mean the legacy support for client certificates in files, I am not sure how exactly it works in case you operate with more than one client certificate.  The certs are loaded via PEM reader, which is still a work in progress:

https://bugzilla.mozilla.org/show_bug.cgi?id=402712

> Also is there currently a way to call SSL_ClearSessionCache() from the pycurl
> layer?

It should be callable from everywhere as long as you link libssl3.so from nss.  If you are asking where it is safe to call the function, I have no idea.

Comment 19 James Antill 2011-02-22 21:16:27 UTC
> Let's distinguish two things.  If you change SSL_VERIFYPEER/SSL_VERIFYHOST,
> libcurl does respect the change.  You only can't reject an already accepted
> certificate.

That's an interesting definition of "respect the change". Yes, you can change it from "don't accept" to "accept" ... but you can't change it back.

> If you have two distinct servers with two distinct certs, the
> problem does not exist.

That's not really _change_ at that point, though. Also what happens when:

1. You have one server with two different certs.

2. You have two servers with one cert.

...I'd _assume_ that #2 would fail, #1 I'm less sure about.


As another type of workaround ... is there anything that can be done in curl so that we don't "accept the cert. into the cache" if sslverify=false ?

Comment 20 seth vidal 2011-02-22 21:20:55 UTC
(In reply to comment #18)

> If you mean the client certificates stored in the NSS database, they are
> managed independently of curl and we use them in read-only mode.  NSS database
> is actually the recommended way to use client certificates with NSS powered
> libcurl.
> 
> If you mean the legacy support for client certificates in files, I am not sure
> how exactly it works in case you operate with more than one client certificate.
>  The certs are loaded via PEM reader, which is still a work in progress:

Interesting and useful to know - considering that a good portion of how yum and subscription management function depends on client certs in a file.

 
> > Also is there currently a way to call SSL_ClearSessionCache() from the pycurl
> > layer?
> 
> It should be callable from everywhere as long as you link libssl3.so from nss. 
> If you are asking where it is safe to call the function, I have no idea.


what I am asking is - is this callable from ANYTHING in the curl binding?

If I want to run it from urlgrabber, for example or even from yum, how would I do that?

would you be willing to write the binding into pycurl?

Comment 21 James Antill 2011-02-22 21:24:46 UTC
So if I change Seth's code to:

nss = ctypes.CDLL("libssl3.so")
[...]
    print s
    nss.SSL_ClearSessionCache()
    d = pycurl.Curl()

...then it works, AFAICS. Working around this at the urlgrabber layer feels horribly wrong though.

Comment 22 Kamil Dudka 2011-02-22 21:30:54 UTC
(In reply to comment #19)
> 1. You have one server with two different certs.

I suppose it would work, but I don't feel skilled/motivated enough to configure a testing setup myself.

> 2. You have two servers with one cert.

How are you going to put two distinct subjects into a single cert?

> As another type of workaround ... is there anything that can be done in curl so
> that we don't "accept the cert. into the cache" if sslverify=false ?

I am not aware of any.  The caching happens at the NSS level.

Comment 23 James Antill 2011-02-22 21:33:38 UTC
> > 2. You have two servers with one cert.

> How are you going to put two distinct subjects into a single cert?

If we don't verify the cert. then you only need one subject.

Comment 24 Kamil Dudka 2011-02-22 21:39:45 UTC
(In reply to comment #21)
> ...then it works, AFAICS. Working around this at the urlgrabber layer feels
> horribly wrong though.

Such a change is less intrusive than the same change in libcurl, which is used from performance-critical and highly multi-threaded environments, such as java virtual machine, etc.

Comment 25 seth vidal 2011-02-22 21:46:27 UTC
okay - if you don't want to make it happen in curl, that's fine - would you be willing to make this available as a method inside pycurl so we don't have to do that horrible ctypes manipulation?

Comment 26 Kamil Dudka 2011-02-22 21:56:12 UTC
The only way to do it through the current libcurl API is to load a CRL via CURLOPT_CRLFILE, as mentioned above.  I can't speak for pycurl, try to ask Karel Klic as the maintainer.

Comment 27 Kamil Dudka 2011-02-23 08:05:13 UTC
Elio, the problem is that once we allow to connect with an untrusted peer cert in the SSL_BadCertHook() callback, NSS caches the result and never asks again unless the application-global SSL session cache is cleared.  Is there any way to bypass the caching of the callback's result?

Comment 28 Kamil Dudka 2011-02-23 08:08:27 UTC
s/never asks again/never asks again for the _same_ peer cert/

Comment 29 Elio Maldonado Batiz 2011-02-23 19:21:22 UTC
Not having found a way yet, adding Kai.

Comment 30 Kai Engert (:kaie) (inactive account) 2011-03-03 15:38:38 UTC
The following is based on my understanding and after reading  
http://www.mozilla.org/projects/security/pki/nss/ref/ssl/sslfnc.html

The NSS trust system does not remember that the server cert was accepted as good.

However, the NSS SSL implementation uses a session cache, for established SSL connections, for efficiency. The default timeout for SSL3+ sessions is 24 hours.

The session cache is global. I don't see a way to have separate session caches within one process.

I understand you have a single process, and you make connections to a single server, but acting with different roles. In role A, you want to allow the server cert, in role B you want to reject the server cert.

Unfortunately I don't see a way to do this efficiently.

If your application executes the roles A and B strictly serially, then you could call SSL_ClearSessionCache in between.

However, if your sockets that act as roles A and B have overlapping lifetime, I'm worried that behaviour might still be unpredictable, depending on the parallelism of your application.

There is another workaround, you could disable the use of the session cache.

You could disable the session cache globally, using 
  SSL_OptionSetDefault(SSL_NO_CACHE, PR_TRUE);

Or you could disable the session cache for selected sockets (after socket creation, prior to the handshake), by calling:
  SSL_OptionSet(fd, SSL_NO_CACHE, PR_TRUE);


In my understanding, if you want individual decisions per socket, you must disable the session cache and accept the performance penalty.

Comment 31 Kamil Dudka 2011-03-03 16:47:27 UTC
(In reply to comment #30)
> Or you could disable the session cache for selected sockets (after socket
> creation, prior to the handshake), by calling:
>   SSL_OptionSet(fd, SSL_NO_CACHE, PR_TRUE);

Thanks!  This ^^^ looks like the way to go.  I'll check if it does what we need.

Comment 32 James Antill 2011-03-03 17:56:20 UTC
Yeh, just disabling the cache for all "do not check" sockets should do exactly what we want.

Comment 33 Kamil Dudka 2011-03-03 18:53:17 UTC
James, do you mean disabling the cache for all (verifypeer == false) SSL connections by default?  It implies that connections without peer cert verification would be slower than the checked ones, which sounds suboptimal to me...

Comment 34 James Antill 2011-03-03 20:15:50 UTC
Yeh, that's what I meant. I figured that verifypeer==false can't be a common configuration and so having it be a little slower would be fine.

Comment 35 Kamil Dudka 2011-03-03 20:54:10 UTC
Any reference to "common configuration" and "little slower"?

"common configuration" is to use PEM reader to load CA certificates.  The PEM reader still stores objects in an unsorted array.  A single connection to a SSL server does 97 calls of pem_FindObjectsInit() on my box:

NSS_DEBUG_PKCS11_MODULE=PEM \
    curl -so/dev/null https://bugzilla.redhat.com 2>&1 \
    | grep C_FindObjectsInit
C_FindObjectsInit                 97          0 z       0.00us      0.00%

That is, we linearly seek through the whole unsorted array of objects 97 times in a row, just to verify a single peer cert.  Doing this in a few hundred threads at a time without the cache may not feel like "a little slower".  Moreover, doing so when the user explicitly asks not to verify the cert is simply stupid.

Comment 36 Alexander Todorov 2011-03-04 09:20:08 UTC
This is going to be used in Anaconda and we need a measurement of how much slower it will be (i.e. 1 hr vs. 3 hrs).

Comment 37 Kamil Dudka 2011-03-04 13:35:33 UTC
The proper solution would be to skip peer verification completely with (verifypeer == 0).  I'll have a look at the NSS code and see how to achieve that.

Rob, could you please explain what the following code is supposed to do in BadCertHandler()?  Thanks in advance!

 if(conn->data->set.ssl.certverifyresult!=0)
    return success;

https://github.com/bagder/curl/blob/17de1cc/lib/nss.c#L633

Comment 38 Rob Crittenden 2011-03-04 15:21:54 UTC
I think the idea was that if you've gotten through validation once with an error set, and you get back into the handler again, it means you ignored the earlier error, so just let the request continue.

This could fire for example if the server does a re-handshake.

Comment 39 Kamil Dudka 2011-03-04 21:44:56 UTC
Then it should additionally check at least for (conn->data->set.ssl.certverifyresult == err), otherwise we might mistakenly accept a connection with an error other than the one the caller intended to ignore.  Or am I wrong?

Comment 40 Rob Crittenden 2011-03-04 22:05:43 UTC
You're right, it is a bad assumption that the error will be the same ignorable error.
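[Editorial note: the condition discussed in comments 37-40 can be sketched as a toy model. This is plain Python with hypothetical function names, not the real C code in lib/nss.c; the error codes are NSS-style constants used purely for illustration:]

```python
# Sketch of the BadCertHandler() decision discussed above (illustrative only).
# Old behaviour: if ANY error was previously ignored, any later error passes.
# Fixed behaviour: only the very same, already-ignored error passes.

def bad_cert_handler_old(stored_certverifyresult, err):
    # Models: if(conn->data->set.ssl.certverifyresult != 0) return success;
    return stored_certverifyresult != 0

def bad_cert_handler_fixed(stored_certverifyresult, err):
    # Only re-accept the exact error the caller already chose to ignore.
    return stored_certverifyresult == err

# Illustrative NSS-style error codes (values are assumptions for the demo).
SEC_ERROR_UNTRUSTED_ISSUER = -8172
SEC_ERROR_EXPIRED_CERTIFICATE = -8181

# A re-handshake raising a DIFFERENT error than the one already ignored:
# the old check wrongly accepts it, the fixed check rejects it.
assert bad_cert_handler_old(SEC_ERROR_UNTRUSTED_ISSUER,
                            SEC_ERROR_EXPIRED_CERTIFICATE)
assert not bad_cert_handler_fixed(SEC_ERROR_UNTRUSTED_ISSUER,
                                  SEC_ERROR_EXPIRED_CERTIFICATE)
# The same already-ignored error still passes under the fixed check.
assert bad_cert_handler_fixed(SEC_ERROR_UNTRUSTED_ISSUER,
                              SEC_ERROR_UNTRUSTED_ISSUER)
```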

Comment 41 Kamil Dudka 2011-03-07 21:26:59 UTC
I'll attach a patch for libcurl.

Comment 42 Kamil Dudka 2011-03-07 21:44:21 UTC
Created attachment 482794 [details]
proposed fix

When called with verifypeer == 0, the SSL cache is disabled for the particular socket and peer authentication is completely skipped during SSL handshake.  The patch also removes the certverifyresult over-optimization from BadCertHandler().
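[Editorial note: the effect of the patch can be sketched by extending the earlier toy model. Again, this is an illustrative Python simulation, not the real libcurl/NSS code; the point is that skipping the cache for unverified connections restores the expected failure:]

```python
# Toy model of the patched behaviour (illustrative, not real libcurl/NSS):
# with verifypeer == 0 the session is NOT cached (cf. SSL_NO_CACHE), so a
# later verifying connection to the same server still checks the cert.

class ToyFixedCache:
    def __init__(self):
        self._accepted = set()  # only VERIFIED sessions get cached

    def connect(self, cert, verifypeer, cert_is_trusted):
        if verifypeer and cert in self._accepted:
            return True         # reuse a previously verified session
        if not verifypeer:
            return True         # skip verification AND skip caching
        if not cert_is_trusted:
            return False        # verification actually happens and fails
        self._accepted.add(cert)
        return True

cache = ToyFixedCache()
# Insecure connection still succeeds...
assert cache.connect("self-signed", verifypeer=False, cert_is_trusted=False)
# ...but the second, verifying connection now fails as expected.
assert not cache.connect("self-signed", verifypeer=True, cert_is_trusted=False)
```

This matches the test-case in comment 55: insecure first, then secure, with the secure attempt expected to fail against an untrusted server.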

Comment 47 Kamil Dudka 2011-03-08 11:48:59 UTC
Created attachment 482881 [details]
proposed fix V2

Additionally updated the man page and added a warning about ignored CURLOPT_SSL_VERIFYHOST.

Comment 48 Rob Crittenden 2011-03-08 21:15:32 UTC
Comment on attachment 482881 [details]
proposed fix V2

This looks good but can you explain why you disable the SSL cache completely when not verifying?

Comment 49 Kamil Dudka 2011-03-08 21:41:15 UTC
If I didn't, NSS would remember that the peer certificate was "verified" and would call neither nss_auth_cert_hook() nor BadCertHandler() anymore for the _same_ peer certificate.  Then setting verifypeer to a non-zero value would be silently ignored when connecting to the same server the next time.

Such a patch should not bring any slowdown for untrusted connections, because SSL_AuthCertificate() is not called at all when verifypeer == 0.  I am only not sure whether that is the only consequence of SSL_OptionSet(fd, SSL_NO_CACHE, PR_TRUE).  Perhaps the NSS guys might know better.  I haven't investigated the NSS code much beyond the ssl3_HandleCertificate() function.

Comment 50 Kamil Dudka 2011-03-15 14:57:30 UTC
pushed upstream:

https://github.com/bagder/curl/commit/806dbb0

Comment 53 Alexander Todorov 2011-03-30 13:17:56 UTC
I believe this has been fixed. See:
https://bugzilla.redhat.com/show_bug.cgi?id=692118#c1

Comment 54 Jan Pazdziora 2011-04-04 12:12:11 UTC
I believe this causes bug 690273.

Comment 55 Kamil Dudka 2011-04-11 13:15:00 UTC
Created attachment 491232 [details]
a test-case

You need to link it with -lcurl and give it an https:// URL of an untrusted server.  It first tries an insecure connection, which should succeed.  Then it tries a secure connection, which should fail with CURLE_SSL_CACERT.  If it gets the expected result in both cases, it returns EXIT_SUCCESS.

Comment 57 Misha H. Ali 2011-04-20 05:23:03 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
When libcurl connected a second time to an SSL server presenting the same server certificate, the certificate was not verified again, because libcurl had confirmed its authenticity before the first connection and the result was cached. This is fixed by disabling the SSL session cache for connections that do not verify the peer certificate, which forces verification of the certificate on subsequent connections.

Comment 58 errata-xmlrpc 2011-05-19 13:12:37 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2011-0573.html

