Bug 2308300

Summary: [CephFS-Mirror] - Sync duration should be displayed in seconds
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Hemanth Kumar <hyelloji>
Component: CephFS
Assignee: Jos Collin <jcollin>
Status: CLOSED ERRATA
QA Contact: Hemanth Kumar <hyelloji>
Severity: high
Docs Contact:
Priority: unspecified
Version: 8.0
CC: akraj, ceph-eng-bugs, cephqe-warriors, gfarnum, jcollin, tserlin, vshankar
Target Milestone: ---
Keywords: Reopened
Target Release: 8.0
Flags: hyelloji: needinfo+
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-19.2.0-8.el9cp
Doc Type: Bug Fix
Doc Text:
.`sync_duration` is now calculated in seconds
Previously, the sync duration was calculated in milliseconds. This caused usability issues, as all other calculations were in seconds. With this fix, `sync_duration` is now displayed in seconds.
Story Points: ---
Clone Of:
Clones: 2312633 (view as bug list)
Environment:
Last Closed: 2024-11-25 09:07:36 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 2312633, 2317218

Description Hemanth Kumar 2024-08-28 07:24:27 UTC
Description of problem:
-----------------------

After upgrading the cluster from 7.1z1 to 8.0 and triggering a snapshot sync as part of CephFS mirroring, the sync duration of the new snapshots is invalid, even though the sync completed in 4-5 seconds.


Peer status on the 7.1z1 version:
-----------------------------

Deleted a file and created a snapshot on the subvolume that was part of mirroring:

[root@magna029 d_000]# rm -rf _magna029_00_95_

[root@magna029 d_000]# ceph fs subvolume snapshot create cephfs2 subvolc2_2 snap_svc2_2_file_remove

[root@magna028 26171d20-f1d5-11ee-b6c2-002590fc2a2e]# ceph --admin-daemon ceph-client.cephfs-mirror.magna028.kxaogg.2.94766774573344.asok fs mirror peer status cephfs2@4 224de958-5b2c-4459-805c-79fca9752835
{
    "/volumes/_nogroup/subvolc2_2": {
        "state": "idle",
        "last_synced_snap": {
            "id": 11,
            "name": "snap_svc2_2_file_remove",
            "sync_duration": 44.572015217000001,
            "sync_time_stamp": "363647.856055s"
        },
        "snaps_synced": 5,
        "snaps_deleted": 0,
        "snaps_renamed": 0
    }
}

The snapshot took 44.57 seconds to sync to the remote cluster.
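The `sync_duration` field can be read programmatically from the admin-socket output. A minimal sketch, assuming peer-status JSON of the shape shown above (the sample data is copied from the 7.1z1 output, where the value is in seconds):

```python
import json

# Peer-status JSON as reported by the 7.1z1 admin socket above.
peer_status = json.loads("""
{
    "/volumes/_nogroup/subvolc2_2": {
        "state": "idle",
        "last_synced_snap": {
            "id": 11,
            "name": "snap_svc2_2_file_remove",
            "sync_duration": 44.572015217000001,
            "sync_time_stamp": "363647.856055s"
        },
        "snaps_synced": 5,
        "snaps_deleted": 0,
        "snaps_renamed": 0
    }
}
""")

# Report the last sync duration per mirrored path.
for path, info in peer_status.items():
    snap = info["last_synced_snap"]
    print(f"{path}: {snap['name']!r} synced in {snap['sync_duration']:.2f} s")
```

In practice the JSON would come from `ceph --admin-daemon <asok> fs mirror peer status <fs>@<id> <peer-uuid>` rather than a string literal.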

--------------------------------------

Upgraded the cluster to 8.0: 
==========================
Repeated the same test on the 8.0 build (ceph version 19.1.0-55.el9cp):

[root@magna029 d_099]# ls _magna029_00_9944_
_magna029_00_9944_

[root@magna029 d_099]# rm -rf _magna029_00_9944_

[root@magna029 d_099]# ls _magna029_00_9944_
ls: cannot access '_magna029_00_9944_': No such file or directory

[root@magna029 d_099]# ceph fs subvolume snapshot create cephfs2 subvolc2_2 snap_svc2_2_new_file_delete


[root@magna028 26171d20-f1d5-11ee-b6c2-002590fc2a2e]# ceph --admin-daemon ceph-client.cephfs-mirror.magna028.kxaogg.2.93988612716904.asok fs mirror peer status cephfs2@4 224de958-5b2c-4459-805c-79fca9752835
{
    "/volumes/_nogroup/subvolc2_2": {
        "state": "idle",
        "last_synced_snap": {
            "id": 15,
            "name": "snap_svc2_2_new_file_delete",
            "sync_duration": 6831,
            "sync_time_stamp": "367396.355202s",
            "sync_bytes": 0
        },
        "snaps_synced": 4,
        "snaps_deleted": 0,
        "snaps_renamed": 0
    }
}

---
The last sync duration is incorrectly reported as 6831 seconds, even though the sync actually completed in a few seconds. Consistent with the Doc Text above, the value was being calculated in milliseconds but displayed as seconds.

Comment 20 errata-xmlrpc 2024-11-25 09:07:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 8.0 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:10216