Bug 1318493 - Introduce ctime-xlator to return correct (client-side set) ctime
Summary: Introduce ctime-xlator to return correct (client-side set) ctime
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Mohammed Rafi KC
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1221099 1298724
 
Reported: 2016-03-17 04:39 UTC by Niels de Vos
Modified: 2019-11-05 03:00 UTC
CC List: 18 users

Fixed In Version: glusterfs-5.0
Clone Of:
Environment:
Last Closed: 2018-10-24 10:30:02 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:




Links
Red Hat Bugzilla 1419733 (CLOSED): GlusterFS truncates nanoseconds to microseconds when setting mtime (last updated 2021-02-22 00:41:40 UTC)

Internal Links: 1419733 1426548

Description Niels de Vos 2016-03-17 04:39:44 UTC
Description of problem:
Gluster uses a POSIX filesystem to store files on the bricks. The atime and mtime can be set with a SETATTR call, which translates to the utimes() system call.
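For illustration only, a minimal sketch of what such a client-initiated time update boils down to (the path is hypothetical):

    import os

    # Explicitly set atime and mtime, roughly what a client SETATTR ends up
    # doing on the brick via utimes(). The kernel bumps the file's ctime on
    # the brick as a side effect, and that brick-local ctime is what gets
    # returned to clients today.
    os.utime("/bricks/brick1/data.bin", times=(1578000000, 1578000000))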

Some applications that do attribute caching (like NFS clients, FS-Cache, ...) invalidate their caches too often, because the ctime returned by a replicated volume can differ depending on which subvolume/brick the attributes are read from.

Some user applications check ctime as well. One commonly used application that emits a warning about differing ctimes is "tar". When a tar archive is created, the process does the following (a minimal sketch of this check is included below):
  1. check ctime
  2. read file (and store it in the archive)
  3. check ctime again
If the ctime differs between steps (1) and (3), tar prints a warning:
  tar: <filename> file changed as we read it
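For illustration only (this is not tar's actual code), the check boils down to something like the following, with hypothetical file paths:

    import os, shutil, sys

    def archive_file(src, dst):
        before = os.stat(src).st_ctime   # step 1: record ctime
        shutil.copyfile(src, dst)        # step 2: read the file into the archive
        after = os.stat(src).st_ctime    # step 3: check ctime again
        if after != before:
            print("tar: %s: file changed as we read it" % src, file=sys.stderr)

    archive_file("/mnt/glustervol/data.bin", "/tmp/archive/data.bin")

On a replicated Gluster volume the two stat() calls can land on different bricks, so the warning can fire even though nothing touched the file.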

To prevent this, Gluster should always return the ctime that was set by the client side. One possible way to do this is to store the ctime in an extended attribute (like "trusted.gluster.ctime").
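As a rough sketch of that idea (illustrative only; it uses the xattr name suggested above, the real feature's on-disk format may differ, and writing "trusted." xattrs requires root):

    import os, struct

    BRICK_FILE = "/bricks/brick1/data.bin"   # hypothetical brick-side path
    XATTR = "trusted.gluster.ctime"

    # Pin the client-observed ctime (seconds + nanoseconds) in an xattr ...
    st = os.stat(BRICK_FILE)
    os.setxattr(BRICK_FILE, XATTR,
                struct.pack("!qq", st.st_ctime_ns // 10**9, st.st_ctime_ns % 10**9))

    # ... and later serve that value back instead of the brick's own ctime.
    sec, nsec = struct.unpack("!qq", os.getxattr(BRICK_FILE, XATTR))
    print(sec, nsec)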

In addition, there are several Gluster-internal processes that cause the ctime (on the brick's XFS filesystem) to change. Self-heal, rebalance, and promotion/demotion with tiering are among the most common ones.

Comment 1 Niels de Vos 2016-09-08 16:03:32 UTC
In addition to ctime (change time), we can also add the btime (birth time, i.e. time of creation).

Comment 2 Takeshi Larsson 2017-02-28 07:09:47 UTC
See also: https://bugzilla.redhat.com/show_bug.cgi?id=1426548

Could we increase the severity and get this done? It seems more like a bug to me than an RFE. It's not just Elasticsearch that checks ctime/mtime/atime.

Comment 3 Mohammed Rafi KC 2017-03-02 07:39:49 UTC
I have started a mail thread on gluster-devel for design discussion [1]. I will summarize the design once it is finalized on the gluster-devel mailing list.


[1] : http://lists.gluster.org/pipermail/gluster-devel/2017-February/052190.html

Comment 4 Takeshi Larsson 2017-03-02 07:48:35 UTC
Fantastic! Thanks Rafi!

(In reply to Mohammed Rafi KC from comment #3)
> I have started a mail thread in gluster-devel for design discussion [1]. I
> will brief out the design once it finalized from gluste-devel mailing list.
> 
> 
> [1] :
> http://lists.gluster.org/pipermail/gluster-devel/2017-February/052190.html

Comment 7 Magnus Glantz 2017-09-01 06:44:37 UTC
Hey Mohammed, what's the status on this?

Comment 8 Mohammed Rafi KC 2017-09-11 09:28:21 UTC
(In reply to Magnus Glantz from comment #7)
> Hey Mohammed, what's the status on this?

I have completed the feature with basic functionality. There are three patches:

1) A new client-side module to handle and coordinate the time attributes
     https://review.gluster.org/#/c/18138/
2) Changes to the management code to add the new xlator module to the graph
     https://review.gluster.org/#/c/18154/
3) Changes to the posix layer where the times are stored as extended attributes
     https://review.gluster.org/#/c/17224/


All of these patches are under review, and it seems to me that the priority for this has dropped.

Let me know if you have any queries or requirements.

Comment 11 Rubin Simons 2019-01-23 10:39:12 UTC
Hello, I seem to be hit by exactly this issue (GlusterFS 3.8.4 using OpenShift 3.9, Elasticsearch 6.5.4); I see that those patches have been abandoned.

This seems like such a fundamental issue that I would assume it impacts a lot more than just Elasticsearch/Lucene/Solr users; has this been fixed in later upstream versions of GlusterFS? Are there any mount options that can help avoid the problem?

Comment 12 Mohammed Rafi KC 2019-01-23 12:51:42 UTC
This is already fixed in the latest releases, starting from gluster-5. The patches you mentioned were automatically abandoned after 90 days of inactivity, but the work was later merged under other change-ids. One of them is https://review.gluster.org/#/c/glusterfs/+/19857/.

Comment 13 Amar Tumballi 2019-01-23 12:59:51 UTC
@Rubin, considering you are using OpenShift, can you test whether things work fine with containers from the https://github.com/gluster/gcs project? It ships the latest glusterfs image (nightly), and you should be able to pick a RWX PV without issues right now.

Comment 14 Rubin Simons 2019-01-24 12:22:58 UTC
Hi Mohammed: Are you sure this has only been fixed since GlusterFS 5.x? I ask because of this:

    https://github.com/amarts/glusterfs/commit/0d1dbf034a4a75ff0ebd74b7218193c00b506247

This commit seems to mention GD_OP_VERSION_4_1_0 for the (then) new ctime feature, whose description is:

"When this option is enabled, time attributes (ctime,mtime,atime) are stored in xattr to keep it consistent across replica and distribute set. The time attributes stored at the backend are not considered."

Comment 15 Amar Tumballi 2019-01-24 12:29:40 UTC
Hi Rubin,

We did get the feature in 4.1.0, but it was not enabled by default. In 5.0 it is enabled by default, and for glusterfs-5.0 we have also done more testing (http://hrkscribbles.blogspot.com/2018/11/elastic-search-on-gluster.html).
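For reference, a hedged example of turning the feature on for a volume in releases where it is not enabled by default (the exact option name can vary between releases, so check "gluster volume set help" on your version):

    gluster volume set <volname> ctime on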

Comment 16 Rubin Simons 2019-01-24 16:17:11 UTC
Hi Amar and Mohammed;

I'm trying to see how the glusterfs versions discussed above, i.e. 4.1 and 5.x, relate to Red Hat's GlusterFS package version. For example, the latest supported package version for the Red Hat GlusterFS product is 3.12.2-25 (that's for "Red Hat Gluster Storage 3.4 Batch 1 Update", source: https://access.redhat.com/articles/2356261).

Am I right in understanding that Red Hat's supported GlusterFS product is nowhere near version 4.1 (I hope not)?

