Bug 1761932 - ctime value is different from atime/mtime on a create of file
Summary: ctime value is different from atime/mtime on a create of file
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: core
Version: rhgs-3.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 3
Assignee: Shwetha K Acharya
QA Contact: milind
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-10-15 15:59 UTC by Nag Pavan Chilakam
Modified: 2020-12-17 04:50 UTC (History)
9 users

Fixed In Version: glusterfs-6.0-38
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-12-17 04:50:17 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:5603 0 None None None 2020-12-17 04:50:33 UTC

Description Nag Pavan Chilakam 2019-10-15 15:59:37 UTC
Description of problem:
==================
With the ctime feature enabled (RFE BZ#1691224), ctime, mtime, and atime must be consistent on a file create.

However, the ctime is slightly different from the mtime/atime.


Version-Release number of selected component (if applicable):
=================
6.0.15

How reproducible:
==========
always

Steps to Reproduce:
==================
1. Create a 1x3 volume, enable features.ctime, and mount the volume
2. Touch a file, say "f1"
3. Run stat on "f1"

[root@dhcp47-105 atime]# stat ctime
  File: ‘ctime’
  Size: 0         	Blocks: 0          IO Block: 131072 regular empty file
Device: 26h/38d	Inode: 10974573272318328388  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2019-10-15 21:27:49.739915387 +0530
Modify: 2019-10-15 21:27:49.739915387 +0530
Change: 2019-10-15 21:27:49.740912036 +0530 -----> ctime different from a/mtime
 Birth: -
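The mismatch above is only in the nanosecond field of ctime. The check can be sketched as a small script run from the mount point (GNU coreutils stat assumed; the filename f1 matches step 2 and is illustrative):

```shell
# Compare full-resolution a/m/c times of a freshly created file.
# With the fix in place, all three should be identical on a
# ctime-enabled volume.
touch f1
atime=$(stat -c '%x' f1)   # access time, nanosecond resolution
mtime=$(stat -c '%y' f1)   # modify time
ctime=$(stat -c '%z' f1)   # change time (ctime)
if [ "$atime" = "$mtime" ] && [ "$mtime" = "$ctime" ]; then
    echo "timestamps consistent"
else
    echo "timestamps inconsistent: a=$atime m=$mtime c=$ctime"
fi
```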



Additional info:
==============
While I don't see any direct impact from the above, this was supposed to be fixed as part of the ctime feature.
It is unclear whether the discrepancy could lead to race conditions or open a small window for unexpected issues.

Comment 4 Yaniv Kaul 2020-05-05 14:29:44 UTC
Do we have this test automated?

Comment 5 Nag Pavan Chilakam 2020-05-05 15:49:24 UTC
(In reply to Yaniv Kaul from comment #4)
> Do we have this test automated?

Not yet automated.

Comment 13 milind 2020-09-17 08:41:43 UTC
Steps used to verify:
        1. Create a volume, enable features.ctime, and mount the volume
        2. Create a directory "dir1" and check the a|m|c times
        3. Create a file "file1" and check the a|m|c times
        4. Again create a new file "file2" as below:
            command >>> touch file2; stat file2; stat file2
        5. Check the a|m|c times of "file2"

As ctime, mtime, and atime are equal and the test cases are passing,
marking this bug as verified.
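Step 4's double stat can be expressed as a small check that the reported times do not change between consecutive lookups (a sketch, assuming GNU stat and the volume mounted at the current directory; "file2" matches the step above):

```shell
# Stat the new file twice and confirm the a|m|c times are identical
# both times; a second stat may be served by a different lookup path,
# so any drift between the two readings would indicate a problem.
touch file2
first=$(stat -c '%x|%y|%z' file2)
second=$(stat -c '%x|%y|%z' file2)
if [ "$first" = "$second" ]; then
    echo "times stable across repeated stat"
else
    echo "times changed: $first -> $second"
fi
```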

Additional info :
=========
rpm -qa | grep -i glusterfs 
glusterfs-6.0-45.el8rhgs.x86_64
glusterfs-fuse-6.0-45.el8rhgs.x86_64
glusterfs-api-6.0-45.el8rhgs.x86_64
glusterfs-selinux-1.0-1.el8rhgs.noarch
glusterfs-client-xlators-6.0-45.el8rhgs.x86_64
glusterfs-server-6.0-45.el8rhgs.x86_64
glusterfs-cli-6.0-45.el8rhgs.x86_64
glusterfs-libs-6.0-45.el8rhgs.x86_64
===============
rpm -qa | grep -i glusterfs
glusterfs-libs-6.0-45.el7rhgs.x86_64
glusterfs-events-6.0-45.el7rhgs.x86_64
glusterfs-client-xlators-6.0-45.el7rhgs.x86_64
glusterfs-cli-6.0-45.el7rhgs.x86_64
glusterfs-rdma-6.0-45.el7rhgs.x86_64
glusterfs-6.0-45.el7rhgs.x86_64
glusterfs-api-6.0-45.el7rhgs.x86_64
glusterfs-geo-replication-6.0-45.el7rhgs.x86_64
glusterfs-fuse-6.0-45.el7rhgs.x86_64
glusterfs-server-6.0-45.el7rhgs.x86_64
===================

test_consistent_timestamps_on_new_entries.py::ConsistentValuesAcrossTimeStamps_cplex_distributed-arbiter_glusterfs::test_time_stamps_on_create PASSED
test_consistent_timestamps_on_new_entries.py::ConsistentValuesAcrossTimeStamps_cplex_distributed-replicated_glusterfs::test_time_stamps_on_create PASSED
test_consistent_timestamps_on_new_entries.py::ConsistentValuesAcrossTimeStamps_cplex_distributed-dispersed_glusterfs::test_time_stamps_on_create PASSED
test_consistent_timestamps_on_new_entries.py::ConsistentValuesAcrossTimeStamps_cplex_replicated_glusterfs::test_time_stamps_on_create PASSED
test_consistent_timestamps_on_new_entries.py::ConsistentValuesAcrossTimeStamps_cplex_distributed_glusterfs::test_time_stamps_on_create PASSED
test_consistent_timestamps_on_new_entries.py::ConsistentValuesAcrossTimeStamps_cplex_dispersed_glusterfs::test_time_stamps_on_create PASSED
test_consistent_timestamps_on_new_entries.py::ConsistentValuesAcrossTimeStamps_cplex_arbiter_glusterfs::test_time_stamps_on_create PASSED

Comment 15 errata-xmlrpc 2020-12-17 04:50:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5603

