Bug 1114031 - build against openblas/atlas
Summary: build against openblas/atlas
Keywords:
Status: CLOSED RAWHIDE
Alias: None
Product: Fedora EPEL
Classification: Fedora
Component: scalapack
Version: el6
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Tom "spot" Callaway
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-06-27 13:49 UTC by Dave Love
Modified: 2016-11-28 18:32 UTC (History)
3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-28 18:32:14 UTC
Type: Bug



Description Dave Love 2014-06-27 13:49:34 UTC
When #1113567 is fixed, please consider building against a fast BLAS -- either
openblas or atlas, as appropriate for the architecture.  Obviously it would be
better if you could just use a fast implementation as libblas (as on Debian),
but that doesn't work with the current packaging.

Comment 1 Tom "spot" Callaway 2014-06-27 14:01:13 UTC
I'm hesitant to get into a "but my BLAS is faster/better/stronger/prettier" battle over this... but if there are some real performance numbers around the different FOSS BLAS implementations, I'd consider it.

Comment 2 Dave Love 2014-06-30 10:40:29 UTC
(In reply to Tom "spot" Callaway from comment #1)
> I'm hesitant to get into a "but my BLAS is faster/better/stronger/prettier"
> battle over this... but if there are some real performance numbers around
> the different FOSS BLAS implementations, I'd consider it.

You're not an HPC person, and I claim my £5 :-).  This should be made
irrelevant by a policy that supports configuring any free BLAS as the system
version, with a good architecture-specific default, which is probably what
you're getting at.

The reference BLAS is well known to be typically several times slower than
tuned implementations, though I don't know how much that's reflected in typical
scalapack performance, and I don't think it's relevant unless the .so is linked
with -lblas.

On our sandybridge compute nodes with EPEL6 packages, openblas gives 15200 MFLOPS
on DP linpack, while the reference gives 3900.  openblas is essentially the
same as proprietary BLAS on the sort of things that typically use most cycles.
There are official figures at <http://www.openblas.net/dgemm_snb_1thread.png>
for what I assume is a faster core in turbo mode.
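As a rough illustration of why the multiply kernel dominates (a toy comparison, not the linpack runs above; the pure-Python loop additionally pays interpreter overhead on top of being unblocked, so the gap it shows overstates reference-vs-tuned BLAS):

```python
# Toy comparison: an unblocked triple-loop matrix multiply versus
# NumPy's dot(), which calls whatever BLAS NumPy was built against.
import time
import numpy as np

n = 80
a = np.random.rand(n, n)
b = np.random.rand(n, n)

def naive_matmul(a, b):
    """Unblocked triple loop over n x n matrices."""
    n = a.shape[0]
    c = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += a[i, k] * b[k, j]
            c[i, j] = s
    return c

t0 = time.perf_counter()
c_naive = naive_matmul(a, b)
t_naive = time.perf_counter() - t0

t0 = time.perf_counter()
c_blas = a.dot(b)
t_blas = time.perf_counter() - t0

# Both paths must agree numerically; only the speed differs.
assert np.allclose(c_naive, c_blas)
print(f"loop: {t_naive:.3f}s  BLAS-backed dot: {t_blas:.6f}s")
```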

I tried the atlas-sse3 package, and got inconsistent results I don't immediately
understand, but it will be substantially slower and is much harder to deal with
as it doesn't dispatch on the architecture.  Atlas is currently needed for ARM,
at least.

Comment 3 Tom "spot" Callaway 2014-06-30 19:49:54 UTC
(In reply to Dave Love from comment #2)

> You're not an HPC person, and I claim my £5 :-).  This should be made
> irrelevant by a policy that supports configuring any free BLAS as the system
> version, with a good architecture-specific default, which is probably what
> you're getting at.

Guilty as charged. My understanding was that the various BLAS implementations _should_ be drop-in replacements for libblas.so, is that incorrect? Your message implies that is not the case...

It would be interesting to know exactly how Debian handles this.

Comment 4 Dave Love 2014-07-01 15:28:32 UTC
(In reply to Tom "spot" Callaway from comment #3)
> Guilty as charged. My understanding was that the various BLAS
> implementations _should_ be drop-in replacements for libblas.so, is that
> incorrect? Your message implies that is not the case...

Reference BLAS is libblas, atlas is libf77blas+libatlas, openblas is
libopenblas.  Also atlas has its own optimized liblapack, and libopenblas
bundles one.  Similarly for the cblas variants.

A program using several numerical libraries might end up linking against all
three, and I'm not sure what happens then.

> It would be interesting to know exactly how Debian handles this.

The Debian versions all supply libblas with the right soname, and you can use
the alternatives system to point to the one appropriate for the system (which
I think will always be openblas for x86_64).  I started trying to sort out EPEL
packages to do the same, and should get back to it as an alternative to just
rebuilding against openblas.
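The mechanism underneath is just a managed symlink: every implementation ships a library carrying the libblas soname, and the alternatives link decides which one the runtime linker resolves. A minimal sketch with invented paths (not the real Debian/EPEL layout):

```python
import os
import tempfile

demo = tempfile.mkdtemp()
for impl in ("openblas", "reference"):
    os.makedirs(os.path.join(demo, impl))
    # Stand-ins for the real shared objects (names illustrative only).
    open(os.path.join(demo, impl, "libblas.so.3"), "w").close()

link = os.path.join(demo, "libblas.so.3")
# The alternatives system maintains a link like this one, which is what
# the dynamic linker resolves when a program asks for libblas.so.3:
os.symlink(os.path.join(demo, "openblas", "libblas.so.3"), link)
print(os.readlink(link))

# Switching implementations is just repointing the link:
os.remove(link)
os.symlink(os.path.join(demo, "reference", "libblas.so.3"), link)
print(os.readlink(link))
```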

Comment 5 Dave Love 2014-12-12 14:34:56 UTC
This is more relevant now that the package explicitly links to reference BLAS.
I could supply changes for openblas, as I've had to rebuild, though it looks as
if it still needs Atlas or reference on non-x86, which must account for a rather
small fraction of the computation done with this package.

[Horrified to notice I used a grocers' apostrophe in a previous comment!]

Comment 6 Ryan H. Lewis (rhl) 2016-07-30 01:24:13 UTC
Hi, 
 
I would +1 this type of change. One thorough option is to use proper so-naming to have basically all of the variants, similar to how MPI is being packaged. However, it seems that there is some ABI compatibility so that things can be swapped out: https://mail.scipy.org/pipermail/numpy-discussion/2014-March/069733.html
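For checking what a given consumer actually uses, NumPy at least reports the BLAS/LAPACK it was built against (assuming NumPy is installed; the output format varies by version):

```python
import numpy as np

# Prints the BLAS/LAPACK build configuration NumPy was compiled with;
# whether an LD_PRELOAD or symlink swap took effect at run time is a
# separate question (ldd on the extension modules answers that).
np.show_config()
```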

Comment 7 Dave Love 2016-08-10 13:14:02 UTC
(In reply to Ryan H. Lewis (rhl) from comment #6)
> Hi, 
>  
> I would +1 this type of change. One thorough option is to use proper
> so-naming to have basically all of the variants, similar to how MPI is being
> packaged.

I'm not sure what that means as the MPI situation is rather different.

> However, it seems that there is some ABI compatibility so that
> things can be swapped out:
> https://mail.scipy.org/pipermail/numpy-discussion/2014-March/069733.html

A proposal to make the BLAS/LAPACK implementations interchangeable in Fedora,
as in Debian, was rejected in committee a while ago, but orion has a redux
somewhere that I should prod him about.

See https://loveshack.fedorapeople.org/blas-subversion.html about a workaround
that's been running on our HPC system for some time.

Comment 8 Susi Lehtola 2016-11-24 21:56:44 UTC
Linking scalapack to reference BLAS is pretty much nonsensical, since ATLAS and OpenBLAS are typically at least an order of magnitude faster for matrix multiplies etc. The whole point of scalapack is to be able to handle large systems faster, but using the slowest BLAS possible makes this impossible.

Comment 9 Tom "spot" Callaway 2016-11-28 18:32:14 UTC
I've linked scalapack to OpenBLAS in rawhide (scalapack-2.0.2-19.fc26).  Please test and confirm that things work well, and reopen if not.

