Created attachment 1268659 [details]
C file that reproduces the problem.

Description of problem:
Some values for the cluster.extra-hash-regex option cause a segfault when initializing a new virtual mount with libgfapi.

Version-Release number of selected component (if applicable):
GlusterFS 3.10

How reproducible:
Easily; the volume simply needs the extra-hash-regex option set to a value containing backslashes.

Steps to Reproduce:
1) Create a new volume
2) Set the extra regex option: gluster volume set vol cluster.extra-hash-regex '"(.*)\\.tmp"'
3) Compile the code in the attachment
4) Execute it
5) It should fail the 3rd time a virtual mount is created

Actual results:
[root@node test-native]# segfault-libgfapi
Creating virtual mount #0
Created virtual mount #0
Creating virtual mount #1
Created virtual mount #1
Creating virtual mount #2
Segmentation fault

Expected results:
[root@mtl-perf-assetstore-node01 test-native]# segfault-libgfapi
Creating virtual mount #0
Created virtual mount #0
Creating virtual mount #1
Created virtual mount #1
Creating virtual mount #2
Created virtual mount #2
Creating virtual mount #3
Created virtual mount #3
Creating virtual mount #4
Created virtual mount #4
Creating virtual mount #5
Created virtual mount #5
Creating virtual mount #6
Created virtual mount #6
Creating virtual mount #7
Created virtual mount #7
Creating virtual mount #8
Created virtual mount #8
Creating virtual mount #9
Created virtual mount #9
Creating virtual mount #10
Created virtual mount #10
[...]
Creating virtual mount #99
Created virtual mount #99

Additional info:
A valgrind output was attached along with the code.
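The attached C file is not inlined in this report. For reference, a minimal sketch of the kind of mount/unmount loop it performs might look like the following; the volume name "vol", the host "node", and the port are placeholders, not values taken from the attachment:

/* Minimal sketch of a libgfapi virtual-mount loop (assumed shape of the
 * reproducer; "vol" and "node" are placeholders). Build with something like:
 *   gcc -o segfault-libgfapi segfault-libgfapi.c $(pkg-config --cflags --libs glusterfs-api)
 */
#include <stdio.h>
#include <stdlib.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    for (int i = 0; i < 100; i++) {
        printf("Creating virtual mount #%d\n", i);

        glfs_t *fs = glfs_new("vol");                 /* placeholder volume name */
        if (!fs)
            return EXIT_FAILURE;

        glfs_set_volfile_server(fs, "tcp", "node", 24007);

        if (glfs_init(fs) != 0) {                     /* fetches the volfile and sets up
                                                         the graph; the crash is hit here
                                                         once the regex option is in play */
            glfs_fini(fs);
            return EXIT_FAILURE;
        }

        printf("Created virtual mount #%d\n", i);
        glfs_fini(fs);                                /* tear the virtual mount down */
    }
    return EXIT_SUCCESS;
}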
It does not crash the 3rd time for me, but it also does not finish the 1000 iterations either...

Created virtual mount #201
Creating virtual mount #202
Created virtual mount #202
Creating virtual mount #203

real    0m4.193s
user    0m1.873s
sys     0m1.307s

[root@vm122-138 segfault-libgfapi.d]# echo $?
24

$ errno 24
EMFILE: Too many open files

errno 24 seems to be the common return value though... Running under Valgrind, it exits on creating mount #70 for me. I'm looking into resource leaks already and will include this test too.
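One way to tell whether the EMFILE failures come from descriptors leaked across glfs_init()/glfs_fini() cycles (rather than from the test program itself) is to count the entries in /proc/self/fd after each iteration. A rough sketch, assuming a Linux host:

/* Hedged sketch: count open descriptors via /proc/self/fd. Calling this
 * after every glfs_fini() and watching the number grow across iterations
 * would point at a descriptor leak in the mount/unmount cycle. */
#include <dirent.h>

static int count_open_fds(void)
{
    int n = 0;
    DIR *d = opendir("/proc/self/fd");
    if (!d)
        return -1;
    while (readdir(d) != NULL)
        n++;        /* includes ".", "..", and the DIR's own descriptor */
    closedir(d);
    return n;
}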
This bug is reported against a version of Gluster that is no longer maintained (or has been EOL'd). See https://www.gluster.org/release-schedule/ for the versions currently maintained. As a result this bug is being closed. If the bug persists on a maintained version of Gluster or against the mainline Gluster repository, request that it be reopened and that the Version field be marked appropriately.