Bug 1316234

Summary: RFE: manipulate DHT on local mount to force writing to local storage
Product: [Community] GlusterFS
Reporter: Dan <dandenson>
Component: distribute
Assignee: bugs <bugs>
Status: CLOSED DEFERRED
QA Contact:
Severity: medium
Docs Contact:
Priority: medium
Version: mainline
CC: atumball, bugs, nbalacha, rgowdapp, smohan, spalai
Target Milestone: ---
Keywords: FutureFeature, Triaged
Target Release: ---
Hardware: All
OS: All
Whiteboard:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-10-08 16:48:34 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Dan 2016-03-09 18:18:42 UTC
Description of problem:
When mounting a gluster volume, there is no way to dictate that writes go to local storage.

Version-Release number of selected component (if applicable):
future mainline

How reproducible:
n/a

Steps to Reproduce:
1.n/a

Additional info:

Consider this use case: we want a single visible volume across storage bricks *but* without distributing files across bricks on write.

There are a number of Network Video Recorders (NVRs). Each NVR needs to write to local storage, and each NVR needs to read from all NVR cluster members so that it can display video incidents in its UI. There is a database storing metadata, but that can easily be synced with native database tools.

Additionally, some of the NVR might be across relatively slow WAN-vpn or point-to-point links.

DHT hash distribution would saturate those links while providing nothing useful to each individual NVR. By using gluster to distribute the volume, each NVR can read from and write to a single file store.

Allowing a volume option like dht.writelocalhost, or simply dht.disable 'true', would fit this use case perfectly.


This can be simulated with 2 nodes and nfs+aufs. On nvr1, mount nvr2:/datastore via NFS at /mnt/nvr2_datastore, then:

mount.aufs -o br=/local_datastore=rw:/mnt/nvr2_datastore=rw none /mnt/aufs_datastoreview

Writes to the aufs mountpoint happen locally, but remote writes are visible via the underlying NFS mountpoint.



This can be 'hacked' with gluster. Create /local_datastore on each node, then:

gluster volume create gluster_datastore nvr1:/local_datastore nvr2:/local_datastore

Use aufs as above, but with /mnt/gluster_datastore instead of /mnt/nvr2_datastore.

aufs writes to /local_datastore but underlays gluster_datastore. Because the branch is 'rw', aufs applies deletes first on /local_datastore and then passes them through to gluster_datastore.

This actually works, but writing into a gluster brick directly isn't a 'production option'.


Ultimate goal: write locally; read and delete both locally and globally.

Comment 1 Amar Tumballi 2018-10-08 16:48:34 UTC
This is the exact use case of 'nufa' volumes.

Can you try testing a volume with 'cluster.nufa enable' (in any version below glusterfs-4.1.x)?
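A minimal sketch of what Comment 1 suggests, reusing the hostnames and paths from the description above (the volume and mountpoint names are illustrative, and this assumes a glusterfs release below 4.1.x where NUFA is still available):

```shell
# Create a plain distribute volume over the per-node datastores
# (hostnames nvr1/nvr2 taken from the description above).
gluster volume create gluster_datastore \
    nvr1:/local_datastore nvr2:/local_datastore

# NUFA (non-uniform file access) makes each client prefer the brick
# local to it when creating new files, instead of DHT hash placement;
# reads still see the whole distributed namespace.
gluster volume set gluster_datastore cluster.nufa enable
gluster volume start gluster_datastore

# Mount on each NVR; new recordings should then land on the local brick
# (as long as it has free space), while the UI can read cluster-wide.
mount -t glusterfs localhost:/gluster_datastore /mnt/gluster_datastore
```

This would replace the nfs+aufs hack entirely: writes stay local without writing into a brick directly, and deletes go through the gluster mountpoint as normal.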