Description of problem:
Dynamic linking to 'libgfapi.so.0' is necessary. With a static compilation, every new gfapi release forces the Samba VFS plugin to be updated and released as well. Worse, buggy code or a regression in the statically linked copy remains in the Samba VFS module until an eventual re-release is built against the newest gfapi. A dynamic dependency would be better: only 'gfapi' would need updates, and all Samba would need is a restart.

Version-Release number of selected component (if applicable):
samba-glusterfs-3.6.9-160.3.el6rhs.x86_64.rpm (71,196 bytes)
samba-3.6.9-160.3.el6rhs.src.rpm (29,500,289 bytes)

How reproducible:
Always

Steps to Reproduce:
1. # ldd /usr/lib64/samba/vfs/glusterfs.so
	linux-vdso.so.1 =>  (0x00007fff9e1ff000)
	libc.so.6 => /lib64/libc.so.6 (0x00007fa98aab1000)
	/lib64/ld-linux-x86-64.so.2 (0x0000003ecca00000)

2. # strings /usr/lib64/samba/vfs/glusterfs.so | grep -v struct | grep glfs | wc -l
63

Actual results:
No dynamic linking; the glfs symbols are built into glusterfs.so.

Expected results:
Dynamic linking with libgfapi.so.0
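The ldd check above can be wrapped in a small helper. A minimal sketch, assuming the module path from this report (the function name and "static or absent" wording are illustrative, not part of the package):

```shell
# Sketch: report whether a Samba VFS module declares libgfapi.so as a
# runtime (dynamic) dependency. A statically built module, or a missing
# file, reports "static or absent".
check_gfapi_link() {
    so="$1"
    if ldd "$so" 2>/dev/null | grep -q 'libgfapi\.so'; then
        echo "dynamic"
    else
        echo "static or absent"
    fi
}
check_gfapi_link /usr/lib64/samba/vfs/glusterfs.so
```

With the affected build from this report, the function prints "static or absent" even though the file exists, because libgfapi never appears in the ldd output.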
When RHS is upgraded but Samba is not, mounting the gluster volume from a new client fails with the following errors:

[2015-02-11 11:19:57.901463] W [xlator.c:191:xlator_dynload] 0-xlator: /usr/lib64/glusterfs/3.6.0.40/xlator/mount/api.so: cannot open shared object file: No such file or directory
[2015-02-11 11:19:57.901505] E [glfs.c:191:create_master] 0-glfs: master xlator for volume1 initialization failed

Access to the volume from existing clients still works, and new mounts from those same clients also work fine; the issue is seen only when a new client tries to mount the volume. The workaround is to restart the smb service. This needs to be fixed for 3.0.4.
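The failure mode follows from the versioned xlator path in the log. A minimal sketch of the on-disk check (illustrative only; `xlator_present` is not real mount code, and the path is copied from the log above):

```shell
# The xlator search path embeds the glusterfs version, so after an RHS
# upgrade the old versioned path no longer exists on disk. A running
# smbd keeps its already-loaded copy in memory (existing mounts keep
# working), but a fresh mount must dlopen() this path again and fails
# until smb is restarted against the new libraries.
xlator_present() {
    [ -e "$1" ] && echo "present" || echo "missing"
}
xlator_present /usr/lib64/glusterfs/3.6.0.40/xlator/mount/api.so
```

On an upgraded system the old versioned directory prints "missing", which matches the "cannot open shared object file" warning in the log.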
Updated the doc text with suggested changes from me, Ira, and Günther.
The suggested information has been added to the Release Notes as a known issue: http://docbuilder.usersys.redhat.com/22629/#Red_Hat_Storage Please review. Thanks.
Looks good to me! :)
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/ If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.