Description of problem:
When mounting a gluster volume, there is no way to dictate that writes land on local storage.

Version-Release number of selected component (if applicable):
future mainline

How reproducible:
n/a

Steps to Reproduce:
1. n/a

Additional info:
Consider this use case: we want a single visible volume across storage bricks, *but* without distributing files on write. There are a number of Network Video Recorders (NVRs). Each NVR needs to write to local storage. Each NVR needs to read from all NVR cluster members so that it can display video incidents in its UI. There is a database storing metadata, but that can easily be synced with db-native tools. Additionally, some of the NVRs might sit across relatively slow WAN-VPN or point-to-point links. DHT hash distribution would saturate those links while providing nothing useful to each individual NVR. By using gluster to distribute the volume, each NVR can read and write to a single file store. Allowing a volume option like dht.writelocalhost, or simply dht.disable 'true', would fit this use case perfectly.

This can be simulated with 2 nodes and nfs+aufs. On nvr1, mount nvr2:/datastore via nfs to /mnt/nvr2_datastore, then:

mount.aufs -o br=/local_datastore=rw:/mnt/nvr2_datastore=rw none /mnt/aufs_datastoreview

Writes to the aufs mountpoint happen locally, but remote writes are still visible via the underlaid nfs mountpoint.

This can be 'hacked' with gluster, using /local_datastore as the brick on each node:

gluster volume create gluster_datastore nvr1:/local_datastore nvr2:/local_datastore

Then use aufs as above, but with /mnt/gluster_datastore instead of /mnt/nvr2_datastore. aufs writes to /local_datastore but underlays the gluster_datastore. Because the branch is 'rw', aufs performs deletes first on /local_datastore, then passes through to gluster_datastore. This actually works, but writing into a gluster brick directly isn't a 'production option'.

Ultimate goal: write local, read and delete local+global.
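For reference, a rough end-to-end sketch of the simulation above. The hostnames nvr1/nvr2, the nvr2:/datastore nfs export, and the /local_datastore paths are taken from the description; the aufs branch options and the replacement of the comma-separated brick list with the space-separated form expected by the gluster CLI are assumptions and may need adjusting:

# on both nodes: local datastore directory (also used as the gluster brick)
mkdir -p /local_datastore

# --- nfs+aufs variant, run on nvr1 ---
mkdir -p /mnt/nvr2_datastore /mnt/aufs_datastoreview
mount -t nfs nvr2:/datastore /mnt/nvr2_datastore
# local branch listed first (=rw) so new files land locally; remote datastore is underlaid
mount.aufs -o br=/local_datastore=rw:/mnt/nvr2_datastore=rw none /mnt/aufs_datastoreview

# --- gluster underlay variant, run on nvr1 ---
gluster volume create gluster_datastore nvr1:/local_datastore nvr2:/local_datastore
gluster volume start gluster_datastore
mkdir -p /mnt/gluster_datastore
mount -t glusterfs nvr1:/gluster_datastore /mnt/gluster_datastore
# same aufs trick, but with the gluster mount as the underlaid branch
mount.aufs -o br=/local_datastore=rw:/mnt/gluster_datastore=rw none /mnt/aufs_datastoreview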
This is the exact use case of 'nufa' volumes. Can you try testing a volume with 'cluster.nufa enable' (in any version below glusterfs-4.1.x)?
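A minimal sketch of that test, assuming the two-brick volume from the description; the option name/value follows the suggestion above, and the exact accepted value ('enable' vs. 'on') may vary by release:

gluster volume create gluster_datastore nvr1:/local_datastore nvr2:/local_datastore
gluster volume set gluster_datastore cluster.nufa enable
gluster volume start gluster_datastore
# mount on each NVR; with NUFA, new files should be created on the local brick
# when the mounting client is also a server in the volume
mount -t glusterfs nvr1:/gluster_datastore /mnt/gluster_datastore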