Description of problem: When building a python program with a C extension, the *.so files in /usr/lib*/python2.6/site-packages/* are also detected as providing libraries. This is clearly wrong; if anything, it should return something like "python2($lib)".

In the find-provides script, there is this .so grepping snippet:

solist=$(echo $filelist | grep "\\.so" | grep -v "^/lib/ld.so" | \
    xargs file -L 2>/dev/null | grep "ELF.*shared object" | cut -d: -f1)

When changing this to:

solist=$(echo $filelist | grep "\\.so" | grep -v "^/lib/ld.so" | \
    grep -v "^/usr/lib/python" | grep -v "^/usr/lib64/python" | \
    xargs file -L 2>/dev/null | grep "ELF.*shared object" | cut -d: -f1)

the python *.so provides are gone (when disabling the internal dependency tracker and using that modified file as the find-provides script). I don't know why this doesn't work out of the box when just modifying...
Actually, we should filter out all private objects (e.g. Java JNI files, plugins). Maybe we can filter out all *.so files:

solist=$(echo $filelist | grep "\\.so" | grep -v "^/lib/ld.so" | \
    xargs file -L 2>/dev/null | grep "ELF.*shared object" | cut -d: -f1)

->

solist=$(echo $filelist | grep "\\.so\\." | grep -v "^/lib/ld.so" | \
    xargs file -L 2>/dev/null | grep "ELF.*shared object" | cut -d: -f1)
If you do "chmod -x /path/to/whatever/*.so", then DT_SONAME will not be extracted from python ELF modules when building. Note that find-provides is typically _NOT_ the automagic extraction method in use. Nor can/will using find-provides.sh lead to packaging that is "multilib" ready, because the bits needed to detect whether dependencies are ELF32 <-> ELF64 will not be generated.
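The chmod -x workaround described above would typically live in a package's %install section; a minimal sketch (the %{python_sitearch} path and the wildcard are illustrative, not a prescribed convention):

```spec
%install
# Strip the executable bit so rpmbuild's internal dependency
# generator skips provides/requires extraction for these
# python extension modules (path is illustrative).
chmod -x %{buildroot}%{python_sitearch}/*.so
```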
(In reply to comment #1)
> Maybe, we can filter out all *.so files.

I don't think that will work. "Usual" libraries are in %{_libdir}/foo.so.2.4.2.3.42 (-> DOT$anynumber), and a devel subpackage provides a symlink from %{_libdir}/foo.so to the library with the soname. What if upstream creates "bad" libraries and there is only the main %{_libdir}/foo.so? In that case it's not possible to simply skip this library in the provides. OK, this is an invented issue at the moment, but because I don't know of a guideline that says such libraries are forbidden, I would not assume there is no chance of one showing up... It would be better to exclude specific libraries, like the python ones. If you want to exclude other libraries somewhere else, you should also exclude them explicitly for now. (Or ask on the devel list for more input on whether a library like the one described above should be declared forbidden.)

______________________________________________________________________________

(In reply to comment #2)
> If you do "chmod -x /path/to/whatever/*.so", then DT_SONAME
> will not be extracted from python ELF modules when building.

That's not applicable in the python case, because every python package would need to do this. find-provides is the appropriate place to exclude them system wide.

> Note that find-provides is typically _NOT_ the automagic extraction
> methods in use. Nor can/will using find-provides.sh lead to a
> packaging that is "multilib" ready because the bits needed to detect
> whether dependencies are ELF32 <-> ELF64 will not be generated.
Again, this holds for the python case, but not for other libs, e.g.:

$ repoquery --requires libmpc.x86_64
/sbin/ldconfig
libc.so.6()(64bit)
libc.so.6(GLIBC_2.2.5)(64bit)
libc.so.6(GLIBC_2.3)(64bit)
libc.so.6(GLIBC_2.3.4)(64bit)
libgmp.so.3()(64bit)
libmpfr.so.1()(64bit)
rtld(GNU_HASH)

$ repoquery --requires libmpc.i686
/sbin/ldconfig
libc.so.6
libc.so.6(GLIBC_2.0)
libc.so.6(GLIBC_2.1.3)
libc.so.6(GLIBC_2.2)
libc.so.6(GLIBC_2.3)
libc.so.6(GLIBC_2.3.4)
libgmp.so.3
libmpfr.so.1
rtld(GNU_HASH)

So some libs are correctly detected (the python libs aren't, and we will never need such dependencies here, because everything works with "Requires: python-foo"). I'm planning on writing something smarter for the python dependencies, but these *.so files will never ever be used for this...
Define "work" please.

For most cases of compiled python modules (which are _NOT_ on ${libdir} paths and typically do _NOT_ have versioning in the file path), doing chmod -x disables RPM dependency extraction. But feel free to define "work" howsoever you wish and invent pathological corner cases that are impediments to perfection in order to justify filtering by path rather than content.

In all cases: find-provides.sh pattern filtering isn't what is typically used when building *.rpm packages, and will lead to packages that are not "multilib ready".
(In reply to comment #4)
> Define "work" please.

Currently, we need to manually Require the dependencies, and "work" here means that the correct dependencies are added and you finally have a running system. Automatic provides/requires for python will still take some time to implement, but feel free to discuss this on the fedora-python mailing list, if you want. This bug report is about getting rid of unused provides of *.so files, which currently are either completely ignored or manually filtered out.

> For most cases of compiled python modules (which are _NOT_
> on ${libdir} paths and typically do _NOT_ have versioning in
> the file path), doing chmod -x disables RPM dependency extraction.

Sure, why not do an unnecessary chmod -x on every python library... Again: these provides of python libs are unused everywhere and can't safely be used anyway, so why emit them? There is absolutely no reason to do so except "because we can".

> But feel free to define "work" howsoever you wish and invent pathological
> corner cases that are impediments to perfection in order to justify
> filtering by path rather than content.

I'm filtering by path and content. Python libs are located somewhere in "/usr/lib*/python*". -> That's the path. Python libs are not meant to be required at all, because multiple packages may define _foo.so libraries, which are then emitted, and you can't distinguish between two _foo.so libs in two different python packages, because they have no relation at all. -> That's the content. What's bothering you with this solution?
Sure you can distinguish between 2 _foo.so packages. Add a different DT_SONAME using -Wl,-soname and RPM will use that as the Provides:. The path and file name are not used when there is a DT_SONAME present.

Whether one can distinguish, and whether the Provides: are needed/useful, are quite different questions. Doing dependency closure is the only known way to determine whether a Provides: is used. In fact many Provides: in RPM aren't used, but are there to ensure compatibility or are preparing for Yet Another dependency assertion framework. Unused Provides: hurt nothing whatsoever; unmatched Requires: cause FULLSTOP installer failure.

Filtering by PATH "works" only until the next change in, say, python *.py / *.pyc / *.pyx paths from 2.6 -> 3.0, which happens quite routinely. OTOH, dependencies based on content (like DT_SONAME) survive path changes and conventional suffix naming.

Nothing is bothering me with whatever you Fedorable Packaging Police choose to do. I suggested a chmod -x work-around that exists in RPM currently, and pointed out that find-provides.sh patterns have _NOT_ been the usual extraction mechanism in RPM for almost all of this century. But feel free to do whatever you want.
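To illustrate the DT_SONAME point above, here is a small sketch (the module name _foo and the soname pkgA__foo.so.0 are made up): without -Wl,-soname the object carries no DT_SONAME entry and RPM falls back to the file name; with it, the chosen soname is what gets extracted.

```shell
# A trivial stand-in for a compiled python extension module.
echo 'int foo(void) { return 42; }' > _foo.c

# Built the usual way, the DSO has no DT_SONAME entry:
gcc -shared -fPIC -o _foo.so _foo.c
readelf -d _foo.so | grep SONAME || echo "no DT_SONAME"

# With an explicit soname, DT_SONAME is present and distinguishes
# this _foo.so from any other package's _foo.so:
gcc -shared -fPIC -Wl,-soname,pkgA__foo.so.0 -o _foo2.so _foo.c
readelf -d _foo2.so | grep SONAME
```
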
Another solution is to only emit provides for libraries in the system library paths (e.g. /lib, /lib64, /usr/lib64). We could add an option to emit provides for all .so files for special packages (e.g. packages that install files under /etc/ld.so.conf.d/).
Limiting libraries to solely /lib* and /usr/lib* will have exceptions (as you have just pointed out). And handling exceptions requires more complex (and hence buggier) configuration than the simpler rule already implemented in rpm that you seem hell-bent on not using:

chmod -x /path/to/some/*.so

This in fact is the problem with filtering by path: you need to set up and maintain the framework to build reliably with all the correct paths and all the weird little special cases. None of that scales too well. But feel free to do whatever you wish. This is hardly the first time that Newer! Better! Bestest! dependency automation has been proposed to "fix" RPM. But perhaps the 14th (or is it the 30th) proposal here will work like a charm ...
How difficult is it to detect foobar.so files that are installed at %python_sitearch? Maybe we can tweak the auto-provides generator script to convert these into Provides: python2(foobar), as Thomas suggested in the original post. I have never done this kind of work, but it doesn't sound like it would be too difficult. Perl does a similar thing. The Java folks were thinking of implementing this too.
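Such a generator could be sketched as a small shell filter in the style of the existing find-provides scripts. Everything here (the function name pyprovides, the python2.* path pattern, the sample file names) is illustrative, not an existing implementation:

```shell
# Hypothetical sketch: map files under the python2 site-packages
# directories to python2(module) provides instead of bare soname
# provides. Reads one file name per line on stdin, like rpm's
# external dependency generator scripts do.
pyprovides() {
    while read f; do
        case "$f" in
            /usr/lib*/python2.*/site-packages/*)
                mod=$(basename "$f")
                mod=${mod%.so}
                mod=${mod%.py}
                echo "python2($mod)"
                ;;
        esac
    done
}

# Example: a compiled extension and a pure-python module.
printf '%s\n' /usr/lib64/python2.6/site-packages/_foo.so \
              /usr/lib/python2.6/site-packages/bar.py | pyprovides
# -> python2(_foo)
#    python2(bar)
```

A real generator would also need to handle packages (foo/__init__.py) and byte-compiled files, but the shape would be the same.
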
(In reply to comment #9)
> How difficult is it to detect a foobar.so files that is installed at
> %python_sitearch ?
>
> Maybe we can tweak the auto-provides generator script to convert this into
> Provides: python2(foobar), as Thomas suggested in the original post. I never
> did this kind of a work, but it doesn't sound like it would be too difficult.

Yes, exactly that's what I'm intending... Everything you can import as a module should be a Provides: python(foo) (no matter whether it's a foo.so, a foo/__init__.py, or a foo.py). But still: providing foo.so doesn't make sense here. Jeff is right about distinguishing between two foo libraries with an appended SONAME, but that is *never* the case in python libs... (More to come on the python devel list.)

(In reply to comment #6)
> Sure you can distinguish between 2 _foo.so packages. Add
> a different DT_SONAME using -Wl,-soname and RPM will use
> that as Provides:. The path and file name are not used when
> there is a DT_SONAME present.

Not applicable, see above.

> Filtering by PATH "works" only until the next change in, say,
> python *.py / *.pyc / *.pyx paths of 2.6 -> 3.0 which happens
> quite routinely. OTOH, dependencies based on content (like DT_SONAME)
> continue in spite of path changes or conventional suffix naming.

Every python lib will be in "/usr/lib*/python?.?" with "?.?" = 2.6, 2.7, 3.2 or something else. So this path pattern will always stay the same.

(In reply to comment #8)
> But feel free to do whatever you wish. This is hardly the first time
> that Newer! Better! Bestest! dependency automation has been proposed
> to "fix" RPM. But perhaps the 14th (or is it the 30th) proposal here that
> will work like a charm ...

? I don't want to change the dependency automation at all. All I want is to disable library detection for python libs, because currently it's done manually anyway. Once I have a _possibly_ good dependency automation for python libs, I'll open another bug for that. (That doesn't belong here.)
FWIW: The Provides: dependency extraction that you wish to disable retrofits ancient ELF behavior that pre-dates the existing DT_SONAME scheme used by ELF loaders. Basically, the file name is used as the soname if DT_SONAME is not present. Ripping out the code (or adding a disabler whose default value is "disabled") achieves what you want: no Provides: will be extracted for python ELF modules that do not use DT_SONAME.
(In reply to comment #11)
> FWIW: The Provides: dependency extraction that you wish to disable
> retrofits ancient ELF behavior that pre-dates the existing DT_SONAME
> scheme used by ELF loaders.
>
> Basically the file name is used as a soname if DT_SONAME is not present.
>
> Ripping out the code (or adding a disabler whose value is default disabled)
> achieves what you want: no Provides: will be extracted for python ELF modules
> that do not use DT_SONAME.

Yes, and emitting them is a bug, not a feature. It does not make sense to add a disabler to every python package that isn't noarch:

$ repoquery --whatrequires "python(abi)" --whatrequires libpython* \
    --enablerepo=rawhide --repoid=rawhide | grep i686 | wc -l
92

I see only one solution to the problem for now - using (comment #0):

> In the find-provides script, there is this so grepping script:
>
> solist=$(echo $filelist | grep "\\.so" | grep -v "^/lib/ld.so" | \
> xargs file -L 2>/dev/null | grep "ELF.*shared object" | cut -d: -f1)
>
> When changing this to:
>
> solist=$(echo $filelist | grep "\\.so" | grep -v "^/lib/ld.so" | \
> grep -v "^/usr/lib/python" | grep -v "^/usr/lib64/python" | \
> xargs file -L 2>/dev/null | grep "ELF.*shared object" | cut -d: -f1)
Careful about how you claim BUG! Legacy-compatible behavior is not a BUG! per se. In fact RPM was changed because clueless packagers who hadn't any knowledge of DT_SONAME and -Wl,-soname (and your comments indicate that you too are clueless about what is actually implemented and why; see your incorrect claims re _foo.so) expected RPM dependencies to be automatically added when DT_SONAME was not present. The time and place that the behavior is removed is an ELF, not RPM, issue. And I was suggesting a disabler in RPM, which is not a python packaging issue.
Ah, so no bug? Let's see what rpmlint says:

$ rpmlint -I private-shared-object-provides
private-shared-object-provides:
A shared object soname provides is provided by a file in a path from which
other packages should not directly load shared objects from. Such shared
objects should thus not be depended on and they should not result in provides
in the containing package. Get rid of the provides if appropriate, for example
by filtering it out during build. Note that in some cases this may require
disabling rpmbuild's internal dependency generator.

-> "They should not result in provides in the containing package."

This all can be avoided by not using the internal dependency generator on python packages, e.g. by filtering the python *.so files out (which would be the easiest way here). But thanks for being called clueless when trying to help solve the issue... All I am saying is "emitting python libs is a bug"; it looks like you are taking that too generally and applying it to all libs out there...

(In reply to comment #13)
> And I was suggesting a disabler in RPM which is no python packaging issue.

How would that work? Disable the dependency generator in general for one build, or for special paths that can be configured somewhere? Disabling it in general for one build in the spec file wouldn't work if a package installs a library into both /usr/lib64/ and /usr/lib64/python?.?/site-packages/, because the latter should not get emitted but the former should. If it's possible to implement such a configurable disabler, that would work too.
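For context, the "filtering it out during build" that rpmlint alludes to is usually wired up by disabling the internal generator and pointing rpmbuild at a replacement script via long-standing macros; a sketch (the script path is illustrative):

```spec
# In the spec file: fall back to external dependency generation
# and use a modified find-provides that skips python *.so files.
%define _use_internal_dependency_generator 0
%define __find_provides /usr/lib/rpm/find-provides.python-filtered
```

As noted earlier in the thread, this external path loses the ELF32/ELF64 marking the internal generator produces, so it is a workaround rather than a clean fix.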
I did not say no bug. Read what I wrote.

You're welcome for "clueless". May I perhaps add "blind" (because you claim "I see only one solution to the problem for now" in spite of several alternatives) and "ignorant" (because you haven't bothered to think through the problem that you claim "when trying to help solving the issue") to the list of calumnies? You are welcome.

RPM is chock full of disablers, and has many "features" sic whose time has come and gone. But describing how a disabler could/should/would be implemented in RPM to disable using the file name for a dependency when DT_SONAME is not present in a DSO is clearly just gonna make you go Huh? until you do your homework.
Come on. If this discussion is beneath your level, simply move on, play with rpm5, and let the RH/Fedora maintainer take care of this issue. Calumny is not an alternative. If you are just huffish because you may not work for RH anymore, search for a punching bag somewhere else, not RH bugzilla.
I claim that you've earned "ignorant". Have fun!
Jeff, Now, was that necessary? If you keep the attitude, everybody will be ignorant to you. You will simply get ignored. Shall we return to the subject? How can we automate the python dependency detection?
This package has changed ownership in the Fedora Package Database. Reassigning to the new owner of this component.
This bug appears to have been reported against 'rawhide' during the Fedora 19 development cycle. Changing version to '19'. (As we did not run this process for some time, it could affect also pre-Fedora 19 development cycle bugs. We are very sorry. It will help us with cleanup during Fedora 19 End Of Life. Thank you.) More information and reason for this action is here: https://fedoraproject.org/wiki/BugZappers/HouseKeeping/Fedora19
Fixed in rawhide now: http://lists.fedoraproject.org/pipermail/devel/2013-May/182919.html
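Independently of the rawhide fix linked above, rpm 4.9+ also offers per-package filtering macros that let a spec opt out of these provides without replacing the whole generator; a sketch (the exact regex is illustrative):

```spec
# In the spec file: exclude automatic provides extraction for
# ELF objects under the python site-arch directory (rpm >= 4.9).
%global __provides_exclude_from ^%{python_sitearch}/.*\\.so$
```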