I have a 4-node cluster in test production and this is quite a problem.

Client: Linux Fedora 10/11, FUSE 2.7.4, GlusterFS 2.0.3
Server: Gentoo 2.6.27-gentoo-r8, GlusterFS 2.0.3

When mounted natively, the filesystem does not complete writing Point Cloud files. When mounted via CIFS (GlusterFS exported via Samba), it writes the files fine. I started an strace of the process, but it crashed; there are too many steps before it actually gets to the Point Cloud generation, and Renderman Studio runs via Maya, so it is VERY slow under strace as well.

Any ideas on how to solve this problem? Everything else is working. I had a problem with Houdini Mantra in GlusterFS 2.0.1 that was fixed by an upgrade to 2.0.3. I was hoping 2.0.4 would fix this problem, but no dice. What other info would you like?

Client config:

volume remote01
  type protocol/client
  option transport-type tcp
  option remote-host slave14
  option remote-subvolume brick01
end-volume

volume remote02
  type protocol/client
  option transport-type tcp
  option remote-host slave15
  option remote-subvolume brick02
end-volume

volume remote03
  type protocol/client
  option transport-type tcp
  option remote-host slave16
  option remote-subvolume brick03
end-volume

volume remote04
  type protocol/client
  option transport-type tcp
  option remote-host slave20
  option remote-subvolume brick04
end-volume

volume distribute
  type cluster/distribute
  subvolumes remote01 remote02 remote03 remote04
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 128KB
  option window-size 1MB
  subvolumes distribute
end-volume

volume cache
  type performance/io-cache
  option cache-size 128MB
  subvolumes writebehind
end-volume

Server config:

volume posix
  type storage/posix
  option directory /node
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick01
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick01.allow 127.0.0.1,192.168.1*
  subvolumes brick01
end-volume
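Since tracing the whole Maya session is too slow, one possible approach (a sketch only; the volfile path, mount point, log paths, and process name are assumptions, not taken from the report) is to mount the client with debug logging and attach strace to the render process only once it reaches the Point Cloud phase:

```shell
# Sketch only: paths and process name below are placeholders.

# Mount the client volfile with verbose logging so the GlusterFS side
# of a failed write shows up in the client log.
glusterfs -f /etc/glusterfs/client.vol \
          -L DEBUG -l /var/log/glusterfs/client.log \
          /mnt/gluster

# Instead of launching Maya under strace from the start, attach to the
# already-running render process when it begins writing the Point Cloud,
# follow forked children, and restrict the trace to file I/O syscalls
# to keep the overhead down.
strace -f -p "$(pgrep -n maya)" \
       -e trace=open,read,write,close,fsync,ftruncate \
       -o /tmp/ptc-trace.log
```

Comparing the strace output of the failed native write with the client log should narrow down whether the write fails in the application or inside the FUSE mount.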
Hi Todd, are you still experiencing this problem in recent releases? Thanks.
Insufficient data / missing log files