Bug 1316234 - RFE: manipulate DHT on local mount to force writing to local storage
Status: ASSIGNED
Product: GlusterFS
Classification: Community
Component: unclassified
Version: mainline
Hardware: All
OS: All
Priority: medium
Severity: medium
Assigned To: bugs@gluster.org
Keywords: FutureFeature, Triaged
Reported: 2016-03-09 13:18 EST by Dan
Modified: 2016-07-31 21:23 EDT
CC: 5 users

Doc Type: Enhancement
Type: Bug

Description Dan 2016-03-09 13:18:42 EST
Description of problem:
When mounting a gluster volume, there is no way to dictate that writes go to local storage.

Version-Release number of selected component (if applicable):
future mainline

How reproducible:
n/a

Steps to Reproduce:
1. n/a

Additional info:

Consider this use case: we want a single visible volume across storage bricks, *but* without distributing files across bricks on write.

There are a number of Network Video Recorders (NVRs). Each NVR needs to write to local storage, and each NVR needs to read from all NVR cluster members so that it can display video incidents in its UI. There is a database storing metadata, but that can easily be synced with database-native tools.

Additionally, some of the NVRs might sit across relatively slow WAN VPN or point-to-point links.

DHT hash distribution would saturate those links while providing nothing useful to any individual NVR. By using gluster to distribute the volume, each NVR can read from and write to a single file store.

Allowing a volume option like dht.writelocalhost or simply dht.disable 'true' would fit this use case perfectly.
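
If such an option existed, usage might look something like the following. This is purely illustrative: neither option exists today, the option names are just the ones suggested above, and only the generic 'gluster volume set' syntax is real.

gluster volume set <volname> dht.writelocalhost on
gluster volume set <volname> dht.disable true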


This can be simulated with 2 nodes and nfs+aufs. On nvr1, mount nvr2:/datastore via nfs to /mnt/nvr2_datastore, then:

mount.aufs -o br=/local_datastore=rw:/mnt/nvr2_datastore=rw none /mnt/aufs_datastoreview

Writes to the aufs mountpoint happen locally, but remote writes are visible through the underlying nfs mountpoint.
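
Spelled out on nvr1, the simulation would be roughly this (a sketch only; it assumes nvr2 exports /datastore over NFS, and the mountpoint names are the ones used above):

# prepare the mountpoints
mkdir -p /mnt/nvr2_datastore /mnt/aufs_datastoreview
# bring in nvr2's datastore over NFS (read/write)
mount -t nfs nvr2:/datastore /mnt/nvr2_datastore
# union the local store (rw, top branch) over the remote NFS view (rw)
mount -t aufs -o br=/local_datastore=rw:/mnt/nvr2_datastore=rw none /mnt/aufs_datastoreview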



This can be 'hacked' with gluster. With /local_datastore as the brick directory on each node:

gluster volume create gluster_datastore nvr1:/local_datastore nvr2:/local_datastore

Then use aufs as above, but with /mnt/gluster_datastore instead of /mnt/nvr2_datastore.

aufs writes go to /local_datastore, with gluster_datastore as the underlying branch. Because that branch is 'rw', aufs performs deletes on local_datastore first, then passes them through to gluster_datastore.

This actually works, but writing into a gluster brick directly isn't a 'production option'.
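
For reference, a rough end-to-end sketch of the hack on nvr1 (hostnames and paths as above; 'force' may be needed if the brick directories sit on the root filesystem; not a production setup):

# on both nodes: create the brick directory
mkdir -p /local_datastore
# on one node: create and start the distributed volume
gluster volume create gluster_datastore nvr1:/local_datastore nvr2:/local_datastore force
gluster volume start gluster_datastore
# on nvr1: mount the volume, then union the local brick (rw) over it
mkdir -p /mnt/gluster_datastore /mnt/aufs_datastoreview
mount -t glusterfs nvr1:/gluster_datastore /mnt/gluster_datastore
mount -t aufs -o br=/local_datastore=rw:/mnt/gluster_datastore=rw none /mnt/aufs_datastoreview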


Ultimate goal: write local; read and delete local+global.
