Red Hat Bugzilla – Bug 137457
GFS2: [RFE] request for read-only GFS mounts.
Last modified: 2014-09-08 20:40:48 EDT
From time to time there are requests for having 'read-only' nodes in a GFS cluster.
Either 'read-only' as a mount option, or the ability to use devices that are
read-only to some nodes.
The biggest issue I can see is read-only nodes and journal recovery.
Updated for GFS2. We need to review both read-only mounts and mounts of read-only block devices (which are subtly different) and check that we can support both.
Fixed flags, etc. Probably we won't get to this for RHEL 5.4, but we can always change the version back if we do manage that.
This is mostly an audit to make sure that we do things correctly. There should be plenty of time to do this for 5.5
Can you describe this feature in more detail?
Are these read-only nodes part of the cluster according to cman?
What restrictions will apply to read-only nodes?
What happens if all read-write nodes in a cluster fail leaving only read-only nodes?
Do read-only nodes update atime on files?
How would the configuration of a read-only node differ from a read-write node?
As per comment #3 it is just an audit thing. I think we do most of this already.
We just need to check that we match expectations wrt mount read only and mount of read only block device (which is forced by the spectator mount option at the moment).
The same rules apply as per local filesystems, except that:
1. "spectator" nodes are not assigned a journal and thus may not be remounted as anything other than "spectator"
2. Read-only and spectator nodes cannot perform journal replay (read-only nodes could potentially do this only at mount time though... need to think about that)
3. How can we tell that a filesystem is "clean" and thus mountable by nodes which are unable to replay journals?
4. How to deal with a "mostly read-only" cluster when all the read-write mounts have gone away/failed? This is important for the single read-write, multiple read-only mount web serving case.
Plus we need to document the results in case anything doesn't match current expectations.
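To illustrate the distinction in points 1 and 2 above, here is a sketch of the two mount variants as they might look from the command line. The device path and mount point are placeholders, not taken from this bug:

```shell
# Read-only mount: the node is still assigned a journal, so it can
# replay journals and may later be remounted read-write.
mount -t gfs2 -o ro /dev/clustervg/gfs2vol /mnt/gfs2

# Spectator mount: no journal is assigned, so the node can never
# perform journal recovery and cannot be remounted read-write.
mount -t gfs2 -o spectator /dev/clustervg/gfs2vol /mnt/gfs2
```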
This feature is poorly defined and needs clarification.
Please create a Product Use Case for this feature describing when this feature would be used, how a customer would configure it, and how the feature should behave under customer loads.
Use the template here:
It isn't really that poorly defined. The bug is requesting that we check to ensure that we are meeting expectations when it comes to read-only mounts, as documented in our docs and in the mount man page. For any undocumented aspects it's a question of following the lead of other filesystems and then documenting that. So it's an audit thing really.
I don't think the PUC really applies in this case. We already support spectator and read-only mounts, and we do the right thing at least most of the time. This is about checking the corner cases and seeing if we can improve things.
Moving to Fedora since this is a feature thing. We can backport later if required, once the work is complete upstream.
This is an audit task that is being addressed at the same time as bug #625123