Bug 2308300 - [CephFS-Mirror] - Sync duration should be displayed in seconds
Summary: [CephFS-Mirror] - Sync duration should be displayed in seconds
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 8.0
Assignee: Jos Collin
QA Contact: Hemanth Kumar
URL:
Whiteboard:
Depends On:
Blocks: 2312633 2317218
 
Reported: 2024-08-28 07:24 UTC by Hemanth Kumar
Modified: 2024-11-25 09:07 UTC
CC List: 7 users

Fixed In Version: ceph-19.2.0-8.el9cp
Doc Type: Bug Fix
Doc Text:
.`sync_duration` is now calculated in seconds
Previously, the sync duration was calculated in milliseconds. This caused usability issues, as all other calculations were in seconds. With this fix, `sync_duration` is now displayed in seconds.
Clone Of:
Clones: 2312633
Environment:
Last Closed: 2024-11-25 09:07:36 UTC
Embargoed:
hyelloji: needinfo+




Links
System                     ID               Last Updated
Ceph Project Bug Tracker   68131            2024-09-18 11:55:20 UTC
Red Hat Issue Tracker      RHCEPH-9594      2024-08-28 07:25:58 UTC
Red Hat Product Errata     RHBA-2024:10216  2024-11-25 09:07:47 UTC

Description Hemanth Kumar 2024-08-28 07:24:27 UTC
Description of problem:
-----------------------

After the cluster was upgraded from 7.1z1 to 8.0, a snapshot sync was triggered as part of CephFS mirroring. The sync duration reported for the new snapshots is invalid: it shows a far larger value even though the sync completed in 4-5 seconds.


Peer status on the 7.1z1 version:
---------------------------------

Deleted a file and created a snapshot on the subvolume that was part of mirroring:

[root@magna029 d_000]# rm -rf _magna029_00_95_

[root@magna029 d_000]# ceph fs subvolume snapshot create cephfs2 subvolc2_2 snap_svc2_2_file_remove

[root@magna028 26171d20-f1d5-11ee-b6c2-002590fc2a2e]# ceph --admin-daemon ceph-client.cephfs-mirror.magna028.kxaogg.2.94766774573344.asok fs mirror peer status cephfs2@4 224de958-5b2c-4459-805c-79fca9752835
{
    "/volumes/_nogroup/subvolc2_2": {
        "state": "idle",
        "last_synced_snap": {
            "id": 11,
            "name": "snap_svc2_2_file_remove",
            "sync_duration": 44.572015217000001,
            "sync_time_stamp": "363647.856055s"
        },
        "snaps_synced": 5,
        "snaps_deleted": 0,
        "snaps_renamed": 0
    }
}

It took 44.57 sec to sync to the remote cluster.
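
For a like-for-like comparison across versions, the duration can be pulled straight out of the peer status JSON. A minimal sketch, assuming jq is installed on the host; the <fsid>, <host>, and <id> placeholders in the admin socket path are illustrative stand-ins, not the exact path from this cluster:

# Illustrative: <fsid>, <host>, <id> stand in for the host-specific socket name.
ceph --admin-daemon /var/run/ceph/<fsid>/ceph-client.cephfs-mirror.<host>.<id>.asok \
    fs mirror peer status cephfs2@4 224de958-5b2c-4459-805c-79fca9752835 \
    | jq '."/volumes/_nogroup/subvolc2_2".last_synced_snap.sync_duration'
44.572015217000001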

--------------------------------------

Upgraded the cluster to 8.0: 
==========================
Repeated the same test on the 8.0 build (ceph version 19.1.0-55.el9cp):

[root@magna029 d_099]# ls _magna029_00_9944_
_magna029_00_9944_

[root@magna029 d_099]# rm -rf _magna029_00_9944_

[root@magna029 d_099]# ls _magna029_00_9944_
ls: cannot access '_magna029_00_9944_': No such file or directory

[root@magna029 d_099]# ceph fs subvolume snapshot create cephfs2 subvolc2_2 snap_svc2_2_new_file_delete


[root@magna028 26171d20-f1d5-11ee-b6c2-002590fc2a2e]# ceph --admin-daemon ceph-client.cephfs-mirror.magna028.kxaogg.2.93988612716904.asok fs mirror peer status cephfs2@4 224de958-5b2c-4459-805c-79fca9752835
{
    "/volumes/_nogroup/subvolc2_2": {
        "state": "idle",
        "last_synced_snap": {
            "id": 15,
            "name": "snap_svc2_2_new_file_delete",
            "sync_duration": 6831,
            "sync_time_stamp": "367396.355202s",
            "sync_bytes": 0
        },
        "snaps_synced": 4,
        "snaps_deleted": 0,
        "snaps_renamed": 0
    }
}

---
The last sync duration is incorrectly shown as 6831, even though the sync actually completed in 4-5 seconds: the value is the duration in milliseconds, emitted without conversion into the seconds that the rest of the status output uses.
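
A quick sanity check with plain awk (nothing cluster-specific): read as milliseconds, the reported value is the right order of magnitude for a sync that finished within a few seconds, consistent with the doc-text conclusion that the duration was being calculated in milliseconds.

$ awk 'BEGIN { printf "%.3f s\n", 6831 / 1000 }'
6.831 s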

Comment 20 errata-xmlrpc 2024-11-25 09:07:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 8.0 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:10216

