Red Hat Bugzilla – Bug 1316234
RFE : manipulate DHT on local mount to force writing to local storage
Last modified: 2016-07-31 21:23:04 EDT
Description of problem:
When mounting a gluster volume, there is no way to dictate that writes land on local storage
Version-Release number of selected component (if applicable):
Steps to Reproduce:
Consider this use case: we want a single visible volume across storage bricks *but* without distributing files on write.
There are a number of Network Video Recorders (NVRs). Each NVR needs to write to local storage, and each NVR needs to read from all NVR cluster members so that it can display video incidents in its UI. There is a database storing metadata, but that can easily be synced with native DB tools.
Additionally, some of the NVRs might sit across relatively slow WAN-VPN or point-to-point links.
DHT hash distribution would saturate those links while providing nothing useful to any individual NVR. By using gluster to distribute the volume, each NVR can read and write through a single file store.
Allowing a volume option like dht.writelocalhost or simply dht.disable 'true' would fit this use case perfectly.
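If such an option existed, enabling it might look like the following sketch. Note that neither option name exists in the current gluster CLI; both are hypothetical names from this request, shown here only with the standard `gluster volume set` syntax:

```shell
# Hypothetical option names (dht.writelocalhost, dht.disable) -- these do
# not exist today; this sketches the interface the RFE is asking for.
gluster volume set gluster_datastore dht.writelocalhost on
# or, to bypass hash-based placement entirely:
gluster volume set gluster_datastore dht.disable true
```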
This can be simulated with 2 nodes and nfs+aufs. On nvr1, mount nvr2:/datastore via NFS at /mnt/nvr2_datastore, then:
mount.aufs -o br=/local_datastore=rw:/mnt/nvr2_datastore=rw none /mnt/aufs_datastoreview
Writes to the aufs mountpoint happen locally, but remote writes remain visible through the underlying NFS mount.
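The two-node simulation above can be scripted end to end. This is a sketch run on nvr1, using the host names and paths from the report, and assumes the kernel has aufs support:

```shell
# On nvr1: expose nvr2's datastore locally via NFS
mkdir -p /mnt/nvr2_datastore /mnt/aufs_datastoreview
mount -t nfs nvr2:/datastore /mnt/nvr2_datastore

# Union the local and remote stores. aufs directs all new writes to the
# first (leftmost) writable branch, so writes stay on local disk while
# files written on nvr2 remain readable through the union mount.
mount -t aufs -o br=/local_datastore=rw:/mnt/nvr2_datastore=rw \
    none /mnt/aufs_datastoreview
```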
This can be 'hacked' with gluster:
gluster volume create gluster_datastore nvr1:/local_datastore nvr2:/local_datastore
Use aufs as above, but with /mnt/gluster_datastore instead of /mnt/nvr2_datastore.
aufs writes to /local_datastore but underlays gluster_datastore. Because the local branch is 'rw', aufs applies deletes on /local_datastore first, then passes them through to gluster_datastore.
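The gluster variant can be sketched the same way. Again a sketch run on nvr1, with brick paths as in the report; mounting the volume from localhost is an assumption:

```shell
# Create a plain distribute volume over both NVRs' local stores
gluster volume create gluster_datastore \
    nvr1:/local_datastore nvr2:/local_datastore
gluster volume start gluster_datastore

# Mount the volume as the global read path, then overlay the local
# brick on top of it with aufs so writes bypass DHT placement.
mkdir -p /mnt/gluster_datastore /mnt/aufs_datastoreview
mount -t glusterfs localhost:/gluster_datastore /mnt/gluster_datastore
mount -t aufs -o br=/local_datastore=rw:/mnt/gluster_datastore=rw \
    none /mnt/aufs_datastoreview
```

Writes through the aufs mountpoint land directly in the brick, behind gluster's back, which is exactly the "not a production option" caveat below.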
This actually works, but writing into a gluster brick directly isn't a 'production option'.
Ultimate goal: write local, read and delete local+global.