Bug 189337

Summary: Missing requires (netcdf) for hdf
Product: Fedora
Component: hdf
Version: rawhide
Hardware: All
OS: Linux
Status: CLOSED RAWHIDE
Severity: medium
Priority: medium
Reporter: Susi Lehtola <susi.lehtola>
Assignee: Orion Poplawski <orion>
QA Contact: Fedora Extras Quality Assurance <extras-qa>
CC: alex, extras-qa, pertusus
Keywords: Reopened
Bug Blocks: 414441
Last Closed: 2008-01-17 22:50:14 UTC

Description Susi Lehtola 2006-04-19 10:17:53 UTC
Description of problem:
Trying to compile software that uses e.g. nccreate or ncclose on a system that
doesn't have netcdf and netcdf-devel installed results in undefined references.

Version-Release number of selected component (if applicable):
hdf-4.2r1-9.fc5

How reproducible:
Always.
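
For illustration, a minimal program that reproduces the failure (the
include directory and link line are assumptions about the Fedora hdf
layout, not something taken from this report):

/* fail.c -- uses the netcdf-2 names directly */
#include <netcdf.h>

int main(void)
{
    int ncid = nccreate("example.hdf", NC_CLOBBER);
    ncclose(ncid);
    return 0;
}

Linking this against hdf alone (e.g. gcc -I/usr/include/hdf fail.c
-lmfhdf -ldf) ends with undefined references to nccreate and ncclose
unless netcdf and netcdf-devel are installed to provide those symbols.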

Comment 1 Patrice Dumas 2006-04-20 22:55:51 UTC
That seems quite normal to me. ncclose and nccreate are part of the
netcdf 2 API, so they are only present if netcdf-devel is there.
The corresponding hdf symbols are prefixed with sd_.

One possibility, if you want a translation of the symbols, is to

#include <hdf2netcdf.h>

and compile with -DHAVE_NETCDF
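
For illustration, a minimal sketch of that workaround (the include
directory and link line below are assumptions, and the extra
-ljpeg/-lz may or may not be needed depending on the build):

/* translate.c -- compile with, e.g.:
 *   gcc -DHAVE_NETCDF -I/usr/include/hdf translate.c -lmfhdf -ldf -ljpeg -lz
 */
#include <hdf2netcdf.h>  /* with HAVE_NETCDF, maps nc* names to hdf's sd_* */
#include <netcdf.h>      /* the netcdf-2 declarations shipped with hdf */

int main(void)
{
    int ncid = nccreate("example.hdf", NC_CLOBBER);  /* resolves to sd_nccreate */
    ncclose(ncid);                                   /* resolves to sd_ncclose */
    return 0;
}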

Comment 2 Orion Poplawski 2006-05-26 20:01:22 UTC
Reopen if comment #1 does not fix the problem.

Comment 3 Patrice Dumas 2007-10-29 11:50:42 UTC
Reopening: in the new hdf upstream, -DHAVE_NETCDF has been
replaced with --disable-netcdf, which does something similar, but
in addition the netcdf include files are no longer distributed.

When using
#include <hdf2netcdf.h>
there is no longer any need to specify -DHAVE_NETCDF, since the right
preprocessor macro is defined in h4config.h (and if the netcdf.h
from hdf is used, this include is already pulled in).
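
For illustration, a minimal sketch of that arrangement, assuming hdf
still installs its netcdf.h under /usr/include/hdf (which is exactly
what --disable-netcdf stopped doing, as described next):

/* no -DHAVE_NETCDF on the command line */
#include <netcdf.h>  /* hdf's copy; pulls in h4config.h and hdf2netcdf.h */

int main(void)
{
    int ncid = ncopen("data.hdf", NC_NOWRITE);  /* translated to sd_ncopen */
    ncclose(ncid);                              /* translated to sd_ncclose */
    return 0;
}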

But since the netcdf include files are no longer distributed
as part of hdf with --disable-netcdf, (sd_)ncopen, for example,
isn't declared anywhere anymore.

There are three possible ways to work around the issue:
1. depend on netcdf-devel
2. do by hand what isn't done because --disable-netcdf is set
3. modify configure.ac/Makefile.am/Makefile.in such that even with
  --disable-netcdf the netcdf include files are installed

I dislike solution 1 very much, because there is no reason to
depend on netcdf-devel, and in that case the netcdf include files
are in the netcdf-3 directory, which is impractical.

I have a slight preference for 3.

An additional issue is that there is no practical way to have the
netcdf2 Fortran API in hdf while avoiding name clashes with
netcdf, since the netcdf2 Fortran headers provided with hdf have
no preprocessing setup analogous to <hdf2netcdf.h>. We could
add that ourselves, but I think it is too much work.

Comment 4 Patrice Dumas 2007-10-29 11:54:27 UTC
(In reply to comment #3)

> I have a slight preference for 3.

In fact, since we cannot ship the netcdf2 hdf Fortran API,
I prefer 2., that is, installing netcdf.h by hand.

I will do that in devel.

Comment 5 Patrice Dumas 2007-10-29 12:37:27 UTC
Proposed update for F-8 in bodhi too.

Comment 6 Alex Lancaster 2007-12-31 01:40:21 UTC
This causes a regression for gdal.  gdal will no longer compile on rawhide (bug
#414441):

http://koji.fedoraproject.org/koji/taskinfo?taskID=284310

the error is:

hdf4dataset.cpp: In member function 'CPLErr
HDF4Dataset::ReadGlobalAttributes(int32)':
hdf4dataset.cpp:576: error: 'MAX_NC_NAME' was not declared in this scope
hdf4dataset.cpp:589: error: 'szAttrName' was not declared in this scope

I also verified that recompiling gdal on F-8 locally also caused the same error.

I asked on the #gdal IRC channel and they suggested it was because
HAVE_NETCDF is no longer defined; it was removed in the following change:

* Mon Oct 29 2007 Patrice Dumas <pertusus> 4.2r2-4
- install the netcdf.h file that describes the netcdf2 hdf enabled
  API

* Mon Oct 29 2007 Patrice Dumas <pertusus> 4.2r2-3
- ship hdf enabled nc* utils as hnc*
- add --disable-netcdf that replaces HAVE_NETCDF
- keep include files timestamps, and have the same across arches
- fix multiarch difference in include files (#341491)

* Wed Oct 17 2007 Patrice Dumas <pertusus> 4.2r2-2
- update to 4.2r2

It appears that something in this change (perhaps dropping HAVE_NETCDF)
is the culprit, so --disable-netcdf may not work properly? In any case,
backing out these changes (or dropping back to the earlier package)
lets gdal compile fine again, so something in this change introduced
the gdal build failure.

I tried recompiling hdf with -DHAVE_NETCDF restored, using the
following patch, but I got the same error:

Index: hdf.spec
===================================================================
RCS file: /cvs/pkgs/rpms/hdf/devel/hdf.spec,v
retrieving revision 1.20
diff -u -r1.20 hdf.spec
--- hdf.spec    29 Oct 2007 12:02:04 -0000      1.20
+++ hdf.spec    31 Dec 2007 01:38:56 -0000
@@ -1,6 +1,6 @@
 Name: hdf
 Version: 4.2r2
-Release: 4%{?dist}
+Release: 5%{?dist}
 Summary: A general purpose library and file format for storing scientific data
 License: BSD
 Group: System Environment/Libraries
@@ -48,7 +48,7 @@
 %build
 # avoid upstream compiler flags settings
 rm config/*linux-gnu
-export CFLAGS="$RPM_OPT_FLAGS -fPIC"
+export CFLAGS="$RPM_OPT_FLAGS -fPIC -DHAVE_NETCDF"
 export FFLAGS="$RPM_OPT_FLAGS -ffixed-line-length-none"
 %configure F77=gfortran --disable-production --disable-netcdf \
  --includedir=%{_includedir}/%{name} --libdir=%{_libdir}/%{name}
@@ -107,6 +107,9 @@
 
 
 %changelog
+* Sun Dec 30 2007 Alex Lancaster <alexlan> 4.2r2-5
+- Add back HAVE_NETCDF preprocessor directive, needed for gdal build
+
 * Mon Oct 29 2007 Patrice Dumas <pertusus> 4.2r2-4
 - install the netcdf.h file that describes the netcdf2 hdf enabled
   API


Comment 7 Alex Lancaster 2007-12-31 01:41:29 UTC
That would be bug #414441 (link wasn't generated properly last time).

Comment 8 Patrice Dumas 2007-12-31 08:21:24 UTC
HAVE_NETCDF is no longer present in hdf. It has been replaced by
the --disable-netcdf configure flag; in that case H4_HAVE_NETCDF
is not defined.

MAX_NC_NAME is not defined through hdf.h (in our case), but
it is defined in the netcdf.h header shipped with hdf.

But my understanding of gdal is that it doesn't use the netcdf
API of hdf, but instead uses the hdf API directly. In that
case, my guess is that it should use
H4_MAX_NC_NAME instead of MAX_NC_NAME.
Maybe something along the lines of

#ifndef MAX_NC_NAME
#define MAX_NC_NAME H4_MAX_NC_NAME
#endif

could do it.

Comment 9 Alex Lancaster 2007-12-31 08:42:05 UTC
(In reply to comment #8)
I'm not a gdal maintainer, so I don't quite know how all the pieces fit
together, but I'm trying to pick up the slack given that gdal's primary
maintainer Cristian Balint seems to have gone AWOL (no replies to
bugzilla, and no builds, since August 2007).

> HAVE_NETCDF is no longer present in hdf. It has been replaced by
> the --disable-netcdf configure flag; in that case H4_HAVE_NETCDF
> is not defined.
> 
> MAX_NC_NAME is not defined through hdf.h (in our case), but
> it is defined in the netcdf.h header shipped with hdf.
> 
> But my understanding of gdal is that it doesn't use the netcdf
> API of hdf, but instead uses the hdf API directly.

I think that's right, AFAICT from:

http://trac.osgeo.org/gdal/wiki/HDF

> In that case, my guess is that it should use
> H4_MAX_NC_NAME instead of MAX_NC_NAME.
> Maybe something along the lines of
> 
> #ifndef MAX_NC_NAME
> #define MAX_NC_NAME H4_MAX_NC_NAME
> #endif
> 
> could do it.

Sounds sensible; are you suggesting that this should be done in gdal
rather than hdf?

Comment 10 Alex Lancaster 2007-12-31 08:59:11 UTC
Adding:

export CPPFLAGS="$CPPFLAGS -DMAX_NC_NAME=H4_MAX_NC_NAME"

to the gdal.spec file gets further, but it still fails to compile; now I get:

hdf4dataset.cpp: In static member function 'static GDALDataset*
HDF4Dataset::Open(GDALOpenInfo*)':
hdf4dataset.cpp:746: error: 'MAX_VAR_DIMS' was not declared in this scope
hdf4dataset.cpp:831: error: 'aiDimSizes' was not declared in this scope
hdf4dataset.cpp:931: error: 'aiDimSizes' was not declared in this scope
hdf4dataset.cpp:982: error: 'aiDimSizes' was not declared in this scope
hdf4dataset.cpp:1004: error: 'aiDimSizes' was not declared in this scope
hdf4dataset.cpp:1033: error: 'aiDimSizes' was not declared in this scope
hdf4dataset.cpp:1040: error: 'aiDimSizes' was not declared in this scope


Comment 11 Patrice Dumas 2008-01-01 23:16:25 UTC
It is the same issue. Here are all the netcdf symbol
defines you may want to add:

#define MAX_NC_OPEN  H4_MAX_NC_OPEN
#define MAX_NC_DIMS  H4_MAX_NC_DIMS
#define MAX_NC_VARS  H4_MAX_NC_VARS
#define MAX_NC_NAME  H4_MAX_NC_NAME
#define MAX_NC_CLASS H4_MAX_NC_CLASS
#define MAX_VAR_DIMS H4_MAX_VAR_DIMS

I think that doing the defines on the command line is
dangerous, because they shouldn't be applied when building
against the real netcdf. They should only be done in the
gdal header files that include the hdf header files. Or
maybe somewhere else, but not globally.
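
For illustration, a hypothetical wrapper header along those lines (the
file and guard names are invented; the H4_* macros are the ones hdf
provides):

/* hdf4compat.h -- include this from gdal code instead of mfhdf.h */
#ifndef GDAL_HDF4COMPAT_H
#define GDAL_HDF4COMPAT_H

#include <mfhdf.h>

/* Define the netcdf-2 limits from their H4_ equivalents only when
 * the real netcdf.h has not already defined them, so the mapping
 * stays inert when building against the real netcdf. */
#ifndef MAX_NC_OPEN
#define MAX_NC_OPEN  H4_MAX_NC_OPEN
#endif
#ifndef MAX_NC_DIMS
#define MAX_NC_DIMS  H4_MAX_NC_DIMS
#endif
#ifndef MAX_NC_VARS
#define MAX_NC_VARS  H4_MAX_NC_VARS
#endif
#ifndef MAX_NC_NAME
#define MAX_NC_NAME  H4_MAX_NC_NAME
#endif
#ifndef MAX_NC_CLASS
#define MAX_NC_CLASS H4_MAX_NC_CLASS
#endif
#ifndef MAX_VAR_DIMS
#define MAX_VAR_DIMS H4_MAX_VAR_DIMS
#endif

#endif /* GDAL_HDF4COMPAT_H */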

Comment 12 Alex Lancaster 2008-01-03 23:05:30 UTC
It looks like Mamoru Tasaka has fixed this in gdal in bug #414441
comment #5. I converted this into a diff against the gdal.spec in
attachment #290803. Once gdal is pushed to rawhide, I'll (re-)close
this bug.

Comment 13 Patrice Dumas 2008-01-17 20:49:07 UTC
Is it fixed?

Comment 14 Alex Lancaster 2008-01-17 22:50:14 UTC
Yep, all good (now that gdal is pushed).  Closing bug.