Bug 646043 - Parallel Support for HDF5
Summary: Parallel Support for HDF5
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Fedora
Classification: Fedora
Component: hdf5
Version: 14
Hardware: x86_64
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Orion Poplawski
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-10-24 00:53 UTC by David Brown
Modified: 2010-12-08 17:21 UTC
2 users

Fixed In Version: hdf5-1.8.5.patch1-4.fc14
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-11-10 21:42:05 UTC
Type: ---


Attachments
first stab at generating parallel hdf5 builds (4.87 KB, patch)
2010-10-24 00:53 UTC, David Brown

Description David Brown 2010-10-24 00:53:00 UTC
Created attachment 455299 [details]
first stab at generating parallel hdf5 builds

Description of problem:
mpich2 and openmpi are supported in Fedora; however, there is no build of hdf5 against either MPI implementation.

Version-Release number of selected component (if applicable):
1.8.5-1.fc14

How reproducible:
Always.

Steps to Reproduce:
1. Install Fedora 14.
2. Install openmpi and mpich2.
3. Search for hdf5-mpich2 or hdf5-openmpi; neither package exists.
  
Actual results:
Nothing

Expected results:
hdf5-mpich2 and hdf5-openmpi packages

Additional info:
So, I've got a patch to the spec file as a first stab at building hdf5 against the MPIs. Any comments or suggestions are welcome. I installed the headers and libraries alongside the mpich2 and openmpi libraries and headers. It doesn't make much sense to create separate directories for every package that could build against any MPI available in Fedora: as an MPI user, if I want mpich2, I want everything that works with it in one place. Keeping them separate implies that the builds could be mixed and matched, which isn't the case with MPI. Mixing scalapack-openmpi with an app built against mpich2 doesn't work and will never be expected to work, so just install scalapack in the same directory as openmpi and be done with it.

Just my thoughts.
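(For illustration, the side-by-side approach described above could be sketched roughly as the following spec-file fragment. This is a hypothetical sketch, not the attached patch: it assumes the module-loading macros from Fedora's MPI packaging guidelines, e.g. %{_openmpi_load}/%{_openmpi_unload}, and the install paths shown are illustrative.)

```spec
%build
# Serial build first
%configure
make %{?_smp_mflags}

# Parallel build against openmpi, installed under the openmpi tree
%{_openmpi_load}
mkdir build-openmpi && pushd build-openmpi
../configure --enable-parallel \
    --prefix=%{_libdir}/openmpi \
    --libdir=%{_libdir}/openmpi/lib \
    --includedir=%{_libdir}/openmpi/include
make %{?_smp_mflags}
popd
%{_openmpi_unload}

# Repeat with %%{_mpich2_load}/%%{_mpich2_unload} for the mpich2 variant
```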

Comment 1 Orion Poplawski 2010-10-26 22:49:28 UTC
Thanks for starting on this.  I've reworked this a bit to make it more to my liking.

Rawhide build:
http://koji.fedoraproject.org/koji/taskinfo?taskID=2555970

F14 build:
http://koji.fedoraproject.org/koji/taskinfo?taskID=2556030

Please test this out and let me know how it works.

Comment 2 David Brown 2010-10-28 03:31:01 UTC
Looks fine to me.


$ mpirun -np 4 h5perf -e $(( 128 * 1024 * 1024 )) -d 4
HDF5 Library: Version 1.8.5-patch1
rank 0: ==== Parameters ====
rank 0: IO API=posix mpiio phdf5 
rank 0: Number of files=1
rank 0: Number of datasets=4
rank 0: Number of iterations=1
rank 0: Number of processes=1:4
rank 0: Number of bytes per process per dataset=128MB
rank 0: Size of dataset(s)=128MB:512MB
rank 0: File size=512MB:2GB
rank 0: Transfer buffer size=64MB:128MB
rank 0: Block size=64MB
rank 0: Block Pattern in Dataset=Contiguous
rank 0: I/O Method for MPI and HDF5=Independent
rank 0: Geometry=1D
rank 0: VFL used for HDF5 I/O=MPI-I/O driver
rank 0: Data storage method in HDF5=Contiguous
rank 0: Env HDF5_PARAPREFIX=not set
rank 0: Dumping MPI Info Object(469762048) (up to 1024 bytes per item):
object is MPI_INFO_NULL
rank 0: ==== End of Parameters ====

Number of processors = 4
Transfer Buffer Size: 67108864 bytes, File size: 2048.00 MBs
      # of files: 1, # of datasets: 4, dataset size: 512.00 MBs
        IO API = POSIX
            Write (1 iteration(s)):
                Maximum Throughput: 513.11 MB/s
                Average Throughput: 513.11 MB/s
                Minimum Throughput: 513.11 MB/s
            Write Open-Close (1 iteration(s)):
                Maximum Throughput: 513.11 MB/s
                Average Throughput: 513.11 MB/s
                Minimum Throughput: 513.11 MB/s
            Read (1 iteration(s)):
                Maximum Throughput: 4743.62 MB/s
                Average Throughput: 4743.62 MB/s
                Minimum Throughput: 4743.62 MB/s
            Read Open-Close (1 iteration(s)):
                Maximum Throughput: 4743.46 MB/s
                Average Throughput: 4743.46 MB/s
                Minimum Throughput: 4743.46 MB/s
        IO API = MPIO
            Write (1 iteration(s)):
                Maximum Throughput:  87.80 MB/s
                Average Throughput:  87.80 MB/s
                Minimum Throughput:  87.80 MB/s
            Write Open-Close (1 iteration(s)):
                Maximum Throughput:  87.80 MB/s
                Average Throughput:  87.80 MB/s
                Minimum Throughput:  87.80 MB/s
            Read (1 iteration(s)):
                Maximum Throughput: 4681.86 MB/s
                Average Throughput: 4681.86 MB/s
                Minimum Throughput: 4681.86 MB/s
            Read Open-Close (1 iteration(s)):
                Maximum Throughput: 4680.71 MB/s
                Average Throughput: 4680.71 MB/s
                Minimum Throughput: 4680.71 MB/s
        IO API = PHDF5 (w/MPI-I/O driver)
            Write (1 iteration(s)):
                Maximum Throughput: 511.34 MB/s
                Average Throughput: 511.34 MB/s
                Minimum Throughput: 511.34 MB/s
            Write Open-Close (1 iteration(s)):
                Maximum Throughput:  78.16 MB/s
                Average Throughput:  78.16 MB/s
                Minimum Throughput:  78.16 MB/s
            Read (1 iteration(s)):
                Maximum Throughput: 5696.18 MB/s
                Average Throughput: 5696.18 MB/s
                Minimum Throughput: 5696.18 MB/s
            Read Open-Close (1 iteration(s)):
                Maximum Throughput: 5620.00 MB/s
                Average Throughput: 5620.00 MB/s
                Minimum Throughput: 5620.00 MB/s
Transfer Buffer Size: 134217728 bytes, File size: 2048.00 MBs
      # of files: 1, # of datasets: 4, dataset size: 512.00 MBs
        IO API = POSIX
            Write (1 iteration(s)):
                Maximum Throughput: 617.11 MB/s
                Average Throughput: 617.11 MB/s
                Minimum Throughput: 617.11 MB/s
            Write Open-Close (1 iteration(s)):
                Maximum Throughput: 617.10 MB/s
                Average Throughput: 617.10 MB/s
                Minimum Throughput: 617.10 MB/s
            Read (1 iteration(s)):
                Maximum Throughput: 4599.13 MB/s
                Average Throughput: 4599.13 MB/s
                Minimum Throughput: 4599.13 MB/s
            Read Open-Close (1 iteration(s)):
                Maximum Throughput: 4598.97 MB/s
                Average Throughput: 4598.97 MB/s
                Minimum Throughput: 4598.97 MB/s
        IO API = MPIO
            Write (1 iteration(s)):
                Maximum Throughput: 641.09 MB/s
                Average Throughput: 641.09 MB/s
                Minimum Throughput: 641.09 MB/s
            Write Open-Close (1 iteration(s)):
                Maximum Throughput: 641.05 MB/s
                Average Throughput: 641.05 MB/s
                Minimum Throughput: 641.05 MB/s
            Read (1 iteration(s)):
                Maximum Throughput: 4679.88 MB/s
                Average Throughput: 4679.88 MB/s
                Minimum Throughput: 4679.88 MB/s
            Read Open-Close (1 iteration(s)):
                Maximum Throughput: 4678.72 MB/s
                Average Throughput: 4678.72 MB/s
                Minimum Throughput: 4678.72 MB/s
        IO API = PHDF5 (w/MPI-I/O driver)
            Write (1 iteration(s)):
                Maximum Throughput: 384.36 MB/s
                Average Throughput: 384.36 MB/s
                Minimum Throughput: 384.36 MB/s
            Write Open-Close (1 iteration(s)):
                Maximum Throughput:  77.48 MB/s
                Average Throughput:  77.48 MB/s
                Minimum Throughput:  77.48 MB/s
            Read (1 iteration(s)):
                Maximum Throughput: 5592.08 MB/s
                Average Throughput: 5592.08 MB/s
                Minimum Throughput: 5592.08 MB/s
            Read Open-Close (1 iteration(s)):
                Maximum Throughput: 5580.84 MB/s
                Average Throughput: 5580.84 MB/s
                Minimum Throughput: 5580.84 MB/s
Number of processors = 2
Transfer Buffer Size: 67108864 bytes, File size: 1024.00 MBs
      # of files: 1, # of datasets: 4, dataset size: 256.00 MBs
        IO API = POSIX
            Write (1 iteration(s)):
                Maximum Throughput: 1469.05 MB/s
                Average Throughput: 1469.05 MB/s
                Minimum Throughput: 1469.05 MB/s
            Write Open-Close (1 iteration(s)):
                Maximum Throughput: 1468.96 MB/s
                Average Throughput: 1468.96 MB/s
                Minimum Throughput: 1468.96 MB/s
            Read (1 iteration(s)):
                Maximum Throughput: 4265.44 MB/s
                Average Throughput: 4265.44 MB/s
                Minimum Throughput: 4265.44 MB/s
            Read Open-Close (1 iteration(s)):
                Maximum Throughput: 4265.23 MB/s
                Average Throughput: 4265.23 MB/s
                Minimum Throughput: 4265.23 MB/s
        IO API = MPIO
            Write (1 iteration(s)):
                Maximum Throughput: 1471.17 MB/s
                Average Throughput: 1471.17 MB/s
                Minimum Throughput: 1471.17 MB/s
            Write Open-Close (1 iteration(s)):
                Maximum Throughput: 1470.86 MB/s
                Average Throughput: 1470.86 MB/s
                Minimum Throughput: 1470.86 MB/s
            Read (1 iteration(s)):
                Maximum Throughput: 4260.97 MB/s
                Average Throughput: 4260.97 MB/s
                Minimum Throughput: 4260.97 MB/s
            Read Open-Close (1 iteration(s)):
                Maximum Throughput: 4259.46 MB/s
                Average Throughput: 4259.46 MB/s
                Minimum Throughput: 4259.46 MB/s
        IO API = PHDF5 (w/MPI-I/O driver)
            Write (1 iteration(s)):
                Maximum Throughput: 1465.63 MB/s
                Average Throughput: 1465.63 MB/s
                Minimum Throughput: 1465.63 MB/s
            Write Open-Close (1 iteration(s)):
                Maximum Throughput:  78.16 MB/s
                Average Throughput:  78.16 MB/s
                Minimum Throughput:  78.16 MB/s
            Read (1 iteration(s)):
                Maximum Throughput: 4626.89 MB/s
                Average Throughput: 4626.89 MB/s
                Minimum Throughput: 4626.89 MB/s
            Read Open-Close (1 iteration(s)):
                Maximum Throughput: 4611.41 MB/s
                Average Throughput: 4611.41 MB/s
                Minimum Throughput: 4611.41 MB/s
Transfer Buffer Size: 134217728 bytes, File size: 1024.00 MBs
      # of files: 1, # of datasets: 4, dataset size: 256.00 MBs
        IO API = POSIX
            Write (1 iteration(s)):
                Maximum Throughput: 1435.79 MB/s
                Average Throughput: 1435.79 MB/s
                Minimum Throughput: 1435.79 MB/s
            Write Open-Close (1 iteration(s)):
                Maximum Throughput: 1435.71 MB/s
                Average Throughput: 1435.71 MB/s
                Minimum Throughput: 1435.71 MB/s
            Read (1 iteration(s)):
                Maximum Throughput: 3967.38 MB/s
                Average Throughput: 3967.38 MB/s
                Minimum Throughput: 3967.38 MB/s
            Read Open-Close (1 iteration(s)):
                Maximum Throughput: 3967.15 MB/s
                Average Throughput: 3967.15 MB/s
                Minimum Throughput: 3967.15 MB/s
        IO API = MPIO
            Write (1 iteration(s)):
                Maximum Throughput: 1399.54 MB/s
                Average Throughput: 1399.54 MB/s
                Minimum Throughput: 1399.54 MB/s
            Write Open-Close (1 iteration(s)):
                Maximum Throughput: 1399.26 MB/s
                Average Throughput: 1399.26 MB/s
                Minimum Throughput: 1399.26 MB/s
            Read (1 iteration(s)):
                Maximum Throughput: 3982.81 MB/s
                Average Throughput: 3982.81 MB/s
                Minimum Throughput: 3982.81 MB/s
            Read Open-Close (1 iteration(s)):
                Maximum Throughput: 3981.39 MB/s
                Average Throughput: 3981.39 MB/s
                Minimum Throughput: 3981.39 MB/s
        IO API = PHDF5 (w/MPI-I/O driver)
            Write (1 iteration(s)):
                Maximum Throughput: 1310.08 MB/s
                Average Throughput: 1310.08 MB/s
                Minimum Throughput: 1310.08 MB/s
            Write Open-Close (1 iteration(s)):
                Maximum Throughput:  76.30 MB/s
                Average Throughput:  76.30 MB/s
                Minimum Throughput:  76.30 MB/s
            Read (1 iteration(s)):
                Maximum Throughput: 4304.38 MB/s
                Average Throughput: 4304.38 MB/s
                Minimum Throughput: 4304.38 MB/s
            Read Open-Close (1 iteration(s)):
                Maximum Throughput: 4290.44 MB/s
                Average Throughput: 4290.44 MB/s
                Minimum Throughput: 4290.44 MB/s
Number of processors = 1
Transfer Buffer Size: 67108864 bytes, File size: 512.00 MBs
      # of files: 1, # of datasets: 4, dataset size: 128.00 MBs
        IO API = POSIX
            Write (1 iteration(s)):
                Maximum Throughput: 1356.09 MB/s
                Average Throughput: 1356.09 MB/s
                Minimum Throughput: 1356.09 MB/s
            Write Open-Close (1 iteration(s)):
                Maximum Throughput: 1355.96 MB/s
                Average Throughput: 1355.96 MB/s
                Minimum Throughput: 1355.96 MB/s
            Read (1 iteration(s)):
                Maximum Throughput: 2885.46 MB/s
                Average Throughput: 2885.46 MB/s
                Minimum Throughput: 2885.46 MB/s
            Read Open-Close (1 iteration(s)):
                Maximum Throughput: 2885.27 MB/s
                Average Throughput: 2885.27 MB/s
                Minimum Throughput: 2885.27 MB/s
        IO API = MPIO
            Write (1 iteration(s)):
                Maximum Throughput: 1352.55 MB/s
                Average Throughput: 1352.55 MB/s
                Minimum Throughput: 1352.55 MB/s
            Write Open-Close (1 iteration(s)):
                Maximum Throughput: 1352.11 MB/s
                Average Throughput: 1352.11 MB/s
                Minimum Throughput: 1352.11 MB/s
            Read (1 iteration(s)):
                Maximum Throughput: 2879.54 MB/s
                Average Throughput: 2879.54 MB/s
                Minimum Throughput: 2879.54 MB/s
            Read Open-Close (1 iteration(s)):
                Maximum Throughput: 2878.41 MB/s
                Average Throughput: 2878.41 MB/s
                Minimum Throughput: 2878.41 MB/s
        IO API = PHDF5 (w/MPI-I/O driver)
            Write (1 iteration(s)):
                Maximum Throughput: 1349.63 MB/s
                Average Throughput: 1349.63 MB/s
                Minimum Throughput: 1349.63 MB/s
            Write Open-Close (1 iteration(s)):
                Maximum Throughput:  93.68 MB/s
                Average Throughput:  93.68 MB/s
                Minimum Throughput:  93.68 MB/s
            Read (1 iteration(s)):
                Maximum Throughput: 3213.96 MB/s
                Average Throughput: 3213.96 MB/s
                Minimum Throughput: 3213.96 MB/s
            Read Open-Close (1 iteration(s)):
                Maximum Throughput: 3199.20 MB/s
                Average Throughput: 3199.20 MB/s
                Minimum Throughput: 3199.20 MB/s
Transfer Buffer Size: 134217728 bytes, File size: 512.00 MBs
      # of files: 1, # of datasets: 4, dataset size: 128.00 MBs
        IO API = POSIX
            Write (1 iteration(s)):
                Maximum Throughput: 1287.61 MB/s
                Average Throughput: 1287.61 MB/s
                Minimum Throughput: 1287.61 MB/s
            Write Open-Close (1 iteration(s)):
                Maximum Throughput: 1287.51 MB/s
                Average Throughput: 1287.51 MB/s
                Minimum Throughput: 1287.51 MB/s
            Read (1 iteration(s)):
                Maximum Throughput: 2558.70 MB/s
                Average Throughput: 2558.70 MB/s
                Minimum Throughput: 2558.70 MB/s
            Read Open-Close (1 iteration(s)):
                Maximum Throughput: 2558.53 MB/s
                Average Throughput: 2558.53 MB/s
                Minimum Throughput: 2558.53 MB/s
        IO API = MPIO
            Write (1 iteration(s)):
                Maximum Throughput: 1295.39 MB/s
                Average Throughput: 1295.39 MB/s
                Minimum Throughput: 1295.39 MB/s
            Write Open-Close (1 iteration(s)):
                Maximum Throughput: 1295.00 MB/s
                Average Throughput: 1295.00 MB/s
                Minimum Throughput: 1295.00 MB/s
            Read (1 iteration(s)):
                Maximum Throughput: 2542.27 MB/s
                Average Throughput: 2542.27 MB/s
                Minimum Throughput: 2542.27 MB/s
            Read Open-Close (1 iteration(s)):
                Maximum Throughput: 2541.41 MB/s
                Average Throughput: 2541.41 MB/s
                Minimum Throughput: 2541.41 MB/s
        IO API = PHDF5 (w/MPI-I/O driver)
            Write (1 iteration(s)):
                Maximum Throughput: 1297.95 MB/s
                Average Throughput: 1297.95 MB/s
                Minimum Throughput: 1297.95 MB/s
            Write Open-Close (1 iteration(s)):
                Maximum Throughput:  92.10 MB/s
                Average Throughput:  92.10 MB/s
                Minimum Throughput:  92.10 MB/s
            Read (1 iteration(s)):
                Maximum Throughput: 2834.48 MB/s
                Average Throughput: 2834.48 MB/s
                Minimum Throughput: 2834.48 MB/s
            Read Open-Close (1 iteration(s)):
                Maximum Throughput: 2822.99 MB/s
                Average Throughput: 2822.99 MB/s
                Minimum Throughput: 2822.99 MB/s
$

I did a lot of I/O and it worked using MPI-IO, POSIX, and HDF5's parallel I/O, so it seems to work for me.

Thanks for being accepting and responsive.

Comment 3 Fedora Update System 2010-10-28 15:31:21 UTC
hdf5-1.8.5.patch1-4.fc14 has been submitted as an update for Fedora 14.
https://admin.fedoraproject.org/updates/hdf5-1.8.5.patch1-4.fc14

Comment 4 Fedora Update System 2010-10-28 22:16:57 UTC
hdf5-1.8.5.patch1-4.fc14 has been pushed to the Fedora 14 testing repository.  If problems still persist, please make note of it in this bug report.
 If you want to test the update, you can install it with 
 su -c 'yum --enablerepo=updates-testing update hdf5'.  You can provide feedback for this update here: https://admin.fedoraproject.org/updates/hdf5-1.8.5.patch1-4.fc14

Comment 5 Fedora Update System 2010-11-02 22:16:26 UTC
hdf5-1.8.5.patch1-4.fc14, R-hdf5-1.6.9-9.fc14 has been pushed to the Fedora 14 testing repository.  If problems still persist, please make note of it in this bug report.
 If you want to test the update, you can install it with 
 su -c 'yum --enablerepo=updates-testing update hdf5 R-hdf5'.  You can provide feedback for this update here: https://admin.fedoraproject.org/updates/R-hdf5-1.6.9-9.fc14,hdf5-1.8.5.patch1-4.fc14

Comment 6 Fedora Update System 2010-11-10 21:42:00 UTC
hdf5-1.8.5.patch1-4.fc14, R-hdf5-1.6.9-9.fc14 has been pushed to the Fedora 14 stable repository.  If problems still persist, please make note of it in this bug report.

Comment 7 David Brown 2010-11-10 22:11:31 UTC
So is hdf5 part of RHEL6 or is that still an EPEL thing?

Comment 8 Orion Poplawski 2010-11-10 23:43:16 UTC
EPEL.  Unfortunately, it appears the configure trick I used in the Fedora spec doesn't work for EL6, so it will take some work to get this built there.

Comment 9 Orion Poplawski 2010-12-08 17:21:24 UTC
hdf5-1.8.5.patch1-5.el6 has been built for EL6.

