Bug 2018439
| Summary: | gluster lib cannot be dlopened: /lib64/libtcmalloc.so.4: cannot allocate memory in static TLS block | | |
|---|---|---|---|
| Product: | [Fedora] Fedora | Reporter: | Richard W.M. Jones <rjones> |
| Component: | glusterfs | Assignee: | Kaleb KEITHLEY <kkeithle> |
| Status: | CLOSED RAWHIDE | QA Contact: | Fedora Extras Quality Assurance <extras-qa> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rawhide | CC: | amarts, anoopcs, congxueyang, humble.devassy, jonathansteffan, kkeithle, matthias, mtasaka, ndevos, ramkrsna |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-10.0-0.3rc0 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-10-29 17:11:26 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2018182 | | |
Description
Richard W.M. Jones
2021-10-29 10:37:50 UTC
Also this lengthy bug report is interesting: https://sourceware.org/bugzilla/show_bug.cgi?id=11787

Reproducer (thanks Dan Berrange):
#include <stdio.h>
#include <dlfcn.h>
#include <assert.h>

int main(int argc, char **argv)
{
    void *h = dlopen("/usr/lib64/libglusterfs.so", RTLD_NOW);
    fprintf(stderr, "dlerror = %s\n", dlerror());
    return 0;
}
$ gcc -Wall gtest.c -o gtest -ldl -g
$ ./gtest
dlerror = /lib64/libtcmalloc.so.4: cannot allocate memory in static TLS block
Perhaps not using tcmalloc fixes this issue? The following is for jemalloc, but looks very similar: https://github.com/jemalloc/jemalloc/issues/1237

(In reply to Mamoru TASAKA from comment #3)
> Perhaps not using tcmalloc fixes this issue?
>
> The following is for jemalloc, but looks very similar:
> https://github.com/jemalloc/jemalloc/issues/1237

It would be nice to revert the tcmalloc change so that we can get a new build of libvirt and libguestfs. But I'm not going to do that without the say-so of the gluster maintainer.

tcmalloc is disabled on all arches except x86_64 in glusterfs-10.0-0.3rc0. Please give it a try and let me know if it doesn't work. I will leave this BZ open for a bit. If I hear nothing by the EOD today (29 Oct) I will close it as CLOSED/RAWHIDE.

$ koji download-build glusterfs-10.0-0.3rc0.fc36 --arch=aarch64
...
$ sudo dnf update *.aarch64.rpm
...
Upgrading:
glusterfs aarch64 10.0-0.3rc0.fc36 @commandline 591 k
glusterfs-cli aarch64 10.0-0.3rc0.fc36 @commandline 181 k
glusterfs-client-xlators aarch64 10.0-0.3rc0.fc36 @commandline 832 k
glusterfs-extra-xlators aarch64 10.0-0.3rc0.fc36 @commandline 39 k
glusterfs-fuse aarch64 10.0-0.3rc0.fc36 @commandline 135 k
libgfapi-devel aarch64 10.0-0.3rc0.fc36 @commandline 25 k
libgfapi0 aarch64 10.0-0.3rc0.fc36 @commandline 90 k
libgfchangelog-devel aarch64 10.0-0.3rc0.fc36 @commandline 13 k
libgfchangelog0 aarch64 10.0-0.3rc0.fc36 @commandline 37 k
libgfrpc-devel aarch64 10.0-0.3rc0.fc36 @commandline 43 k
libgfrpc0 aarch64 10.0-0.3rc0.fc36 @commandline 56 k
libgfxdr-devel aarch64 10.0-0.3rc0.fc36 @commandline 11 k
libgfxdr0 aarch64 10.0-0.3rc0.fc36 @commandline 30 k
libglusterd0 aarch64 10.0-0.3rc0.fc36 @commandline 14 k
libglusterfs-devel aarch64 10.0-0.3rc0.fc36 @commandline 117 k
libglusterfs0 aarch64 10.0-0.3rc0.fc36 @commandline 292 k
python3-gluster aarch64 10.0-0.3rc0.fc36 @commandline 18 k
...
$ cat gtest.c
#include <stdio.h>
#include <dlfcn.h>
#include <assert.h>

int main(int argc, char **argv)
{
    void *h = dlopen("/usr/lib64/libglusterfs.so", RTLD_NOW);
    if (!h)
        fprintf(stderr, "dlerror = %s\n", dlerror());
    return 0;
}
$ gcc -Wall gtest.c -o gtest -ldl -g
$ ./gtest
[no output]
Looks good here. I will build libvirt next (bug 2018182):
https://koji.fedoraproject.org/koji/taskinfo?taskID=78022743

It's looking good so I guess we can close this bug now, thanks.
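For reference, disabling a feature on all but one architecture in an RPM package is normally done with an %ifarch conditional in the spec file. A hedged sketch of how the per-arch tcmalloc switch might look (the actual macro names in glusterfs.spec may differ; this assumes glusterfs 10 exposes an --enable-tcmalloc configure option):

```
# Hypothetical sketch of a per-arch conditional in glusterfs.spec:
# build against tcmalloc only on x86_64, plain malloc everywhere else.
%ifarch x86_64
%global tcmalloc_opt --enable-tcmalloc
%endif

%build
%configure %{?tcmalloc_opt}
```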