Bug 137457 - GFS2: [RFE] request for read-only GFS mounts.
Product: Fedora
Classification: Fedora
Component: kernel
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Steve Whitehouse
QA Contact: Cluster QE
Keywords: FutureFeature
Depends On: 625123
Blocks: 533192 526947
Reported: 2004-10-28 12:34 EDT by Erling Nygaard
Modified: 2014-09-08 20:40 EDT
CC List: 9 users

Doc Type: Enhancement

Attachments: None
Description Erling Nygaard 2004-10-28 12:34:34 EDT
From time to time there are requests for 'read-only' nodes in a GFS cluster:
either 'read-only' as a mount option, or the ability to use devices that are
read-only from the perspective of some nodes.

The biggest issue I can see is journal recovery on read-only nodes.
Comment 1 Steve Whitehouse 2008-12-10 10:19:04 EST
Updated for GFS2. We need to review both read-only mounts and mounts of read-only block devices (which are subtly different) and check that we can support both.
Comment 2 Steve Whitehouse 2009-01-09 08:08:21 EST
Fixed flags, etc. Probably we won't get to this for RHEL 5.4, but we can always change the version back if we do manage that.
Comment 3 Steve Whitehouse 2009-05-21 11:52:33 EDT
This is mostly an audit to make sure that we do things correctly. There should be plenty of time to do this for 5.5.
Comment 4 Nate Straz 2009-10-16 12:09:46 EDT
Can you describe this feature in more detail?

Are these read-only nodes part of the cluster according to cman?

What restrictions will apply to read-only nodes?

What happens if all read-write nodes in a cluster fail leaving only read-only nodes?

Do read-only nodes update atime on files?

How would the configuration of a read-only node differ from a read-write node?
Comment 5 Steve Whitehouse 2009-10-16 12:20:13 EDT
As per comment #3, it is just an audit thing. I think we do most of this already.

We just need to check that we match expectations with regard to read-only mounts and mounts of read-only block devices (the latter is currently forced by the spectator mount option).

The same rules apply as per local filesystems, except that:
 1. "spectator" nodes are not assigned a journal and thus may not be remounted as anything other than "spectator"
 2. Read-only and spectator nodes cannot perform journal replay (read-only nodes could potentially do this only at mount time though... need to think about that)
 3. How can we tell that a filesystem is "clean" and thus mountable by nodes which are unable to replay journals?
 4. How to deal with a "mostly read-only" cluster when all the read-write mounts have gone away/failed? This is important for the single read-write, multiple read-only mount web serving case.

Plus we need to document the results in case anything doesn't match current expectations.
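For reference, the distinction between the mount flavours discussed above can be sketched as follows. This is only an illustration; the device and mount-point paths are hypothetical, and it assumes an already-created GFS2 filesystem on a clustered volume:

```shell
# Hypothetical paths; assumes an existing GFS2 filesystem on a cluster node.

# Read-only mount: the node is still assigned a journal, so it can
# later be remounted read-write.
mount -t gfs2 -o ro /dev/clustervg/gfs2vol /mnt/gfs2
mount -o remount,rw /mnt/gfs2   # permitted for a plain ro mount

# Spectator mount: no journal is assigned, so (per point 1 above) the
# node may not be remounted as anything other than spectator.
mount -t gfs2 -o spectator /dev/clustervg/gfs2vol /mnt/gfs2

# Block device that is itself read-only, e.g. marked read-only with
# blockdev(8); per the comment above this currently forces the
# spectator behaviour.
blockdev --setro /dev/clustervg/gfs2vol
mount -t gfs2 -o ro /dev/clustervg/gfs2vol /mnt/gfs2
```

These commands require root and a running cluster, so they are a sketch of the semantics rather than something to run verbatim.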
Comment 6 Nate Straz 2009-12-09 11:21:22 EST
This feature is poorly defined and needs clarification.

Please create a Product Use Case for this feature describing when this feature would be used, how a customer would configure it, and how the feature should behave under customer loads.

Use the template here:
Comment 7 Steve Whitehouse 2009-12-09 12:15:38 EST
It isn't really that poorly defined. The bug is requesting that we check to ensure that we are meeting expectations when it comes to read-only mounts, as documented in our docs and in the mount man page. For any undocumented aspects it's a question of following the lead of other filesystems and then documenting that. So it's an audit thing really.

I don't think the PUC really applies in this case. We already support spectator and read only mounts and we do the right thing at least most of the time. This is about checking the corner cases and seeing if we can improve things.
Comment 8 Steve Whitehouse 2009-12-17 05:19:25 EST
Moving to Fedora since this is a feature thing. We can backport later if required, when the work is complete upstream.
Comment 9 Steve Whitehouse 2010-08-19 09:14:56 EDT
This is an audit task that is being addressed at the same time as bug #625123
