Bug 2303640 - [cephfs][cephfs-journal-tool] cephfs-journal-tool import from invalid file throws unexpected exception
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 8.1
Assignee: Jos Collin
QA Contact: Hemanth Kumar
Docs Contact: Rivka Pollack
URL:
Whiteboard:
Depends On:
Blocks: 2351689
 
Reported: 2024-08-08 09:42 UTC by julpark
Modified: 2025-06-26 12:14 UTC
CC List: 8 users

Fixed In Version: ceph-19.2.1-137.el9cp
Doc Type: Bug Fix
Doc Text:
.Invalid headers no longer cause a segmentation fault during `journal import`
Previously, the `cephfs-journal-tool` did not check for a valid header during a `journal import` operation, which could cause a segmentation fault. With this fix, the header is checked when running the `journal import` command, and segmentation faults no longer occur with missing headers.
Clone Of:
Environment:
Last Closed: 2025-06-26 12:14:25 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 68928 0 None None None 2024-11-13 12:17:14 UTC
Red Hat Issue Tracker RHCEPH-10211 0 None None None 2024-11-13 13:14:15 UTC
Red Hat Issue Tracker RHCEPH-10217 0 None None None 2024-11-12 06:16:11 UTC
Red Hat Product Errata RHSA-2025:9775 0 None None None 2025-06-26 12:14:34 UTC

Description julpark 2024-08-08 09:42:38 UTC
Description of problem:

cephfs-journal-tool import from invalid file throws unexpected exception

Version-Release number of selected component (if applicable):

18.2.1-229.el9cp

How reproducible:

create a random file and import it

cephfs-journal-tool --rank cephfs:0  journal import aa

Steps to Reproduce:
1. touch aa
2. cephfs-journal-tool --rank cephfs:0  journal import aa

Actual results:

created 2024-08-05T07:59:22.622741-0400
min_mon_release 18 (reef)
election_strategy: 1
0: [v2:10.0.209.118:3300/0,v1:10.0.209.118:6789/0] mon.ceph-julpark-eu21gr-node1-installer
1: [v2:10.0.208.222:3300/0,v1:10.0.208.222:6789/0] mon.ceph-julpark-eu21gr-node3
2: [v2:10.0.210.28:3300/0,v1:10.0.210.28:6789/0] mon.ceph-julpark-eu21gr-node2

  -112> 2024-08-08T05:29:27.066-0400 7ffa2b038280 10 monclient: _renew_subs
  -111> 2024-08-08T05:29:27.066-0400 7ffa2b038280 10 monclient: _send_mon_message to mon.ceph-julpark-eu21gr-node2 at v2:10.0.210.28:3300/0
  -110> 2024-08-08T05:29:27.066-0400 7ffa24f82640  4 set_mon_vals no callback set
  -109> 2024-08-08T05:29:27.066-0400 7ffa2b038280  1 librados: init done
  -108> 2024-08-08T05:29:27.066-0400 7ffa2b038280  4 main: JournalTool: resolving pool 2
  -107> 2024-08-08T05:29:27.068-0400 7ffa2377f640  4 mgrc handle_mgr_map Got map version 53
  -106> 2024-08-08T05:29:27.068-0400 7ffa2377f640  4 mgrc handle_mgr_map Active mgr is now [v2:10.0.209.118:6800/1582713329,v1:10.0.209.118:6801/1582713329]
  -105> 2024-08-08T05:29:27.068-0400 7ffa2377f640  4 mgrc reconnect Starting new session with [v2:10.0.209.118:6800/1582713329,v1:10.0.209.118:6801/1582713329]
  -104> 2024-08-08T05:29:27.068-0400 7ffa2b038280  4 main: JournalTool: creating IoCtx..
  -103> 2024-08-08T05:29:27.068-0400 7ffa2b038280  4 main: Executing for rank 0
  -102> 2024-08-08T05:29:27.068-0400 7ffa2b038280  5 AuthRegistry(0x560d3d4184d0) adding auth protocol: cephx
  -101> 2024-08-08T05:29:27.068-0400 7ffa2b038280  5 AuthRegistry(0x560d3d4184d0) adding auth protocol: cephx
  -100> 2024-08-08T05:29:27.068-0400 7ffa2b038280  5 AuthRegistry(0x560d3d4184d0) adding auth protocol: cephx
   -99> 2024-08-08T05:29:27.068-0400 7ffa2b038280  5 AuthRegistry(0x560d3d4184d0) adding auth protocol: none
   -98> 2024-08-08T05:29:27.068-0400 7ffa2b038280  5 AuthRegistry(0x560d3d4184d0) adding con mode: secure
   -97> 2024-08-08T05:29:27.068-0400 7ffa2b038280  5 AuthRegistry(0x560d3d4184d0) adding con mode: crc
   -96> 2024-08-08T05:29:27.068-0400 7ffa2b038280  5 AuthRegistry(0x560d3d4184d0) adding con mode: secure
   -95> 2024-08-08T05:29:27.068-0400 7ffa2b038280  5 AuthRegistry(0x560d3d4184d0) adding con mode: crc
   -94> 2024-08-08T05:29:27.068-0400 7ffa2b038280  5 AuthRegistry(0x560d3d4184d0) adding con mode: secure
   -93> 2024-08-08T05:29:27.068-0400 7ffa2b038280  5 AuthRegistry(0x560d3d4184d0) adding con mode: crc
   -92> 2024-08-08T05:29:27.068-0400 7ffa2b038280  5 AuthRegistry(0x560d3d4184d0) adding con mode: crc
   -91> 2024-08-08T05:29:27.068-0400 7ffa2b038280  5 AuthRegistry(0x560d3d4184d0) adding con mode: secure
   -90> 2024-08-08T05:29:27.068-0400 7ffa2b038280  5 AuthRegistry(0x560d3d4184d0) adding con mode: crc
   -89> 2024-08-08T05:29:27.068-0400 7ffa2b038280  5 AuthRegistry(0x560d3d4184d0) adding con mode: secure
   -88> 2024-08-08T05:29:27.068-0400 7ffa2b038280  5 AuthRegistry(0x560d3d4184d0) adding con mode: crc
   -87> 2024-08-08T05:29:27.068-0400 7ffa2b038280  5 AuthRegistry(0x560d3d4184d0) adding con mode: secure
   -86> 2024-08-08T05:29:27.068-0400 7ffa2b038280  2 auth: KeyRing::load: loaded key file /etc/ceph/ceph.client.admin.keyring
   -85> 2024-08-08T05:29:27.068-0400 7ffa2b038280  5 asok(0x560d3c776000) register_command objecter_requests cmddesc objecter_requests hook 0x560d3d477830 EEXIST
   -84> 2024-08-08T05:29:27.068-0400 7ffa2b038280 10 monclient: build_initial_monmap
   -83> 2024-08-08T05:29:27.068-0400 7ffa2b038280  1 build_initial for_mkfs: 0
   -82> 2024-08-08T05:29:27.068-0400 7ffa2b038280 10 monclient: monmap:
epoch 0
fsid 00000000-0000-0000-0000-000000000000
last_changed 0.000000
created 0.000000
min_mon_release 0 (unknown)
election_strategy: 1
0: [v2:10.0.208.222:3300/0,v1:10.0.208.222:6789/0] mon.noname-c
1: [v2:10.0.209.118:3300/0,v1:10.0.209.118:6789/0] mon.noname-a
2: [v2:10.0.210.28:3300/0,v1:10.0.210.28:6789/0] mon.noname-b

Expected results:

The tool should refuse to import an invalid file and report a clear error instead of crashing.

Additional info:

   1/ 5 mgrc
   1/ 5 dpdk
   1/ 5 eventtrace
   1/ 5 prioritycache
   0/ 5 test
   0/ 5 cephfs_mirror
   0/ 5 cephsqlite
   0/ 5 seastore
   0/ 5 seastore_onode
   0/ 5 seastore_odata
   0/ 5 seastore_omap
   0/ 5 seastore_tm
   0/ 5 seastore_t
   0/ 5 seastore_cleaner
   0/ 5 seastore_epm
   0/ 5 seastore_lba
   0/ 5 seastore_fixedkv_tree
   0/ 5 seastore_cache
   0/ 5 seastore_journal
   0/ 5 seastore_device
   0/ 5 seastore_backref
   0/ 5 alienstore
   1/ 5 mclock
   0/ 5 cyanstore
   1/ 5 ceph_exporter
   1/ 5 memstore
  -2/-2 (syslog threshold)
  99/99 (stderr threshold)
--- pthread ID / name mapping for recent threads ---
  7ffa1ef76640 / fn_mds_utility
  7ffa20779640 / ms_dispatch
  7ffa20f7a640 / io_context_pool
  7ffa2377f640 / ms_dispatch
  7ffa24f82640 / io_context_pool
  7ffa26f86640 / ms_dispatch
  7ffa27787640 / io_context_pool
  7ffa27f88640 / ceph_timer
  7ffa28789640 / msgr-worker-2
  7ffa28f8a640 / msgr-worker-1
  7ffa2978b640 / msgr-worker-0
  7ffa2b038280 / cephfs-journal-
  max_recent       500
  max_new         1000
  log_file /var/lib/ceph/crash/2024-08-08T09:29:27.083498Z_59307b7f-0bea-4795-b61b-00c25bfc4489/log
--- end dump of recent events ---
Segmentation fault (core dumped)
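
For illustration of the guard described in the Doc Text field, where the header is now checked before a `journal import` proceeds, the following is a minimal standalone C++ sketch. The DumpHeader layout, the magic string, and every name in it are hypothetical placeholders invented for this sketch; they do not reflect the actual cephfs-journal-tool dump format or the Ceph source.

// Minimal standalone sketch (not Ceph source): validate a dump-file header
// before handing the file to an import routine. The magic value and the
// DumpHeader layout are hypothetical placeholders for illustration only.
#include <cstdint>
#include <cstring>
#include <fstream>
#include <iostream>
#include <string>

// Hypothetical on-disk header for a journal dump file.
struct DumpHeader {
  char magic[16];       // identifies the file as a journal dump
  uint64_t journal_len; // payload length recorded by the exporter
};

constexpr char kExpectedMagic[16] = "JOURNAL-DUMP-V1"; // placeholder value

// Returns true only if the file is long enough to hold a header and the
// magic matches. An empty or random file (for example one created with
// `touch aa`) fails here instead of reaching the import path.
bool validate_dump_header(const std::string &path, DumpHeader &out) {
  std::ifstream in(path, std::ios::binary);
  if (!in) {
    std::cerr << "cannot open " << path << "\n";
    return false;
  }
  if (!in.read(reinterpret_cast<char *>(&out), sizeof(out))) {
    std::cerr << path << ": too short to contain a journal header\n";
    return false;
  }
  if (std::memcmp(out.magic, kExpectedMagic, sizeof(kExpectedMagic)) != 0) {
    std::cerr << path << ": bad magic, refusing to import\n";
    return false;
  }
  return true;
}

int main(int argc, char **argv) {
  if (argc < 2) {
    std::cerr << "usage: " << argv[0] << " <dump-file>\n";
    return 2;
  }
  DumpHeader h{};
  if (!validate_dump_header(argv[1], h)) {
    return 1; // reject invalid input instead of crashing later
  }
  std::cout << "header ok, journal length " << h.journal_len << "\n";
  return 0;
}

With a check of this kind, an empty file created with `touch aa` is rejected up front with an error instead of reaching the code path that crashed with a segmentation fault.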

Comment 9 errata-xmlrpc 2025-06-26 12:14:25 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:9775

