Bug 2211477 - cephfs-journal-tool --rank with the "all" option is confusing
Summary: cephfs-journal-tool --rank with the "all" option is confusing
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 6.1
Hardware: All
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 6.1z2
Assignee: Venky Shankar
QA Contact: Amarnath
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks: 2235257
 
Reported: 2023-05-31 18:05 UTC by Amarnath
Modified: 2023-11-03 04:01 UTC
CC List: 7 users

Fixed In Version: ceph-17.2.6-130.el9cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-10-12 16:34:26 UTC
Embargoed:




Links
  Ceph Project Bug Tracker 52149 (last updated 2023-09-05 13:06:32 UTC)
  Red Hat Issue Tracker RHCEPH-6773 (last updated 2023-05-31 18:06:59 UTC)
  Red Hat Product Errata RHSA-2023:5693 (last updated 2023-10-12 16:36:17 UTC)

Description Amarnath 2023-05-31 18:05:07 UTC
Description of problem:
cephfs-journal-tool --rank with the "all" option does not work.

[root@ceph-amk-fs-tools-e05s2e-node7 ~]# cephfs-journal-tool --rank all journal export backup.bin
Error ((22) Invalid argument)
2023-05-31T13:57:37.356-0400 7faec14effc0 -1 main: Couldn't determine MDS rank.

If we pass --rank cephfs:0 instead, it creates backup.bin:
[root@ceph-amk-fs-tools-e05s2e-node7 ~]# cephfs-journal-tool --rank cephfs:0 journal export backup.bin
journal is 157699724~236619391
wrote 236619391 bytes at offset 157699724 to backup.bin
NOTE: this is a _sparse_ file; you can
	$ tar cSzf backup.bin.tgz backup.bin
      to efficiently compress it while preserving sparseness.

[root@ceph-amk-fs-tools-e05s2e-node7 ~]# ls -lrt
total 231088
-rw-r--r--. 1 root root      2412 Jul 24  2015 run.sh
-rw-r--r--. 1 root root 394319115 May 31 13:55 backup.bin
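
For reference, the sparseness can be confirmed by comparing apparent size
with actual disk usage (a sketch assuming GNU coreutils du; the sizes in
the comments are approximations taken from the listing above):

# ls -l shows the apparent size; du shows blocks actually allocated
du --apparent-size -h backup.bin   # ~377M apparent
du -h backup.bin                   # ~226M allocated on disk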

Help command output:

[root@ceph-amk-fs-tools-e05s2e-node7 opt]# cephfs-journal-tool -h
Usage: 
  cephfs-journal-tool [options] journal <command>
    <command>:
      inspect
      import <path> [--force]
      export <path>
      reset [--force]
  cephfs-journal-tool [options] header <get|set> <field> <value>
    <field>: [trimmed_pos|expire_pos|write_pos|pool_id]
  cephfs-journal-tool [options] event <effect> <selector> <output> [special options]
    <selector>:
      --range=<start>..<end>
      --path=<substring>
      --inode=<integer>
      --type=<UPDATE|OPEN|SESSION...><
      --frag=<ino>.<frag> [--dname=<dentry string>]
      --client=<session id integer>
    <effect>: [get|recover_dentries|splice]
    <output>: [summary|list|binary|json] [--path <path>]

General options:
  --rank=filesystem:mds-rank|all Journal rank (mandatory)
  --journal=<mdlog|purge_queue>  Journal type (purge_queue means
                                 this journal is used to queue for purge operation,
                                 default is mdlog, and only mdlog support event mode)

Special options
  --alternate-pool <name>     Alternative metadata pool to target
                              when using recover_dentries.
  --conf/-c FILE    read configuration from the given configuration file
  --id ID           set ID portion of my name
  --name/-n TYPE.ID set name
  --cluster NAME    set cluster name (default: ceph)
  --setuser USER    set uid to user or uid (and gid to user's gid)
  --setgroup GROUP  set gid to group or gid
  --version         show version and quit
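
A possible workaround, sketched under the assumption that only the
filesystem:rank form is accepted: export each active rank explicitly.
The rank IDs below (0 and 1) are illustrative; the real ones are listed
by `ceph fs status`.

# Export the journal of each active MDS rank to its own file
for r in 0 1; do
    cephfs-journal-tool --rank cephfs:$r journal export backup.bin.$r
done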



Version-Release number of selected component (if applicable):

[root@ceph-amk-fs-tools-e05s2e-node7 ~]# ceph versions
{
    "mon": {
        "ceph version 17.2.6-70.el9cp (fe62dcdbb2c6e05782a3e2b67d025b84ff5047cc) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.6-70.el9cp (fe62dcdbb2c6e05782a3e2b67d025b84ff5047cc) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.6-70.el9cp (fe62dcdbb2c6e05782a3e2b67d025b84ff5047cc) quincy (stable)": 12
    },
    "mds": {
        "ceph version 17.2.6-70.el9cp (fe62dcdbb2c6e05782a3e2b67d025b84ff5047cc) quincy (stable)": 3
    },
    "overall": {
        "ceph version 17.2.6-70.el9cp (fe62dcdbb2c6e05782a3e2b67d025b84ff5047cc) quincy (stable)": 20
    }
}


How reproducible:
Always

Steps to Reproduce:
1. On a cluster with an active CephFS filesystem, run:
   cephfs-journal-tool --rank all journal export backup.bin

Actual results:
Error ((22) Invalid argument) / "main: Couldn't determine MDS rank."

Expected results:
The journal is exported for all MDS ranks, or the help output makes the
accepted --rank syntax clear.

Additional info:

Comment 1 Greg Farnum 2023-06-01 15:28:54 UTC
(In reply to Amarnath from comment #0)
>   --rank=filesystem:mds-rank|all Journal rank (mandatory)

Did you try with "--rank=all" instead of "--rank all"?

Comment 2 Amarnath 2023-06-04 18:34:06 UTC
Hi Greg, 

I have tried --rank=all; it still shows an invalid argument error:

[root@ceph-amk-test-2mpl4e-node9 ~]# cephfs-journal-tool --rank cephfs:0 journal export backup.bin
journal is 4194304~51521245
wrote 51521245 bytes at offset 4194304 to backup.bin
NOTE: this is a _sparse_ file; you can
	$ tar cSzf backup.bin.tgz backup.bin
      to efficiently compress it while preserving sparseness.
[root@ceph-amk-test-2mpl4e-node9 ~]# cephfs-journal-tool --rank=all journal export backup.bin
Error (2023-06-04T14:30:07.033-0400 7f9b66d3ff00 -1 main: Couldn't determine MDS rank.
(22) Invalid argument)
[root@ceph-amk-test-2mpl4e-node9 ~]# ls -lrt
total 50320
-rw-r--r--. 1 root root 55715549 Jun  4 14:29 backup.bin
[root@ceph-amk-test-2mpl4e-node9 ~]# 

Regards,
Amarnath

Comment 3 Greg Farnum 2023-06-06 14:44:10 UTC
Okay, I briefly looked at the code and git history here, and I'm confused about how "all" is an option that the parsing system can handle, so I'm leaving this for Venky to sort out when he gets back.
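
A hedged illustration of the apparent rule (later confirmed by the testing
in comment 8): "all" is only recognized as the rank portion of a
filesystem:rank pair, never as a standalone value.

cephfs-journal-tool --rank all journal inspect         # fails: Couldn't determine MDS rank
cephfs-journal-tool --rank cephfs:all journal inspect  # runs against every rank of "cephfs"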

Comment 8 Amarnath 2023-06-21 04:39:38 UTC
Hi Venky,

The command below works and exports one backup file per rank, with the rank appended as a suffix to the file name:

[root@ceph-amk-enable-mds-hd0cgh-node7 ~]# cephfs-journal-tool --rank cephfs:all journal export backup.bin
journal is 4194304~43882
wrote 43882 bytes at offset 4194304 to backup.bin.0
NOTE: this is a _sparse_ file; you can
	$ tar cSzf backup.bin.0.tgz backup.bin.0
      to efficiently compress it while preserving sparseness.
journal is 4194304~4628
wrote 4628 bytes at offset 4194304 to backup.bin.1
NOTE: this is a _sparse_ file; you can
	$ tar cSzf backup.bin.1.tgz backup.bin.1
      to efficiently compress it while preserving sparseness.
[root@ceph-amk-enable-mds-hd0cgh-node7 ~]# ls -lrt
total 60
-rw-r--r--. 1 root root 4238186 Jun 21 00:34 backup.bin.0
-rw-r--r--. 1 root root 4198932 Jun 21 00:34 backup.bin.1
[root@ceph-amk-enable-mds-hd0cgh-node7 ~]# 
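
Presumably each per-rank dump can also be restored individually by pairing
it with the matching rank; a sketch based on the import syntax shown in the
help text, not something exercised in this bug:

cephfs-journal-tool --rank cephfs:0 journal import backup.bin.0
cephfs-journal-tool --rank cephfs:1 journal import backup.bin.1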

As Greg mentioned, it would be helpful if the help output were updated to show --rank=filesystem:{mds-rank|all}.

Hi Venky,
Should we track the above change in this bug, or should I create a separate one?

Regards,
Amarnath

Comment 11 Amarnath 2023-08-31 05:52:05 UTC
Hi Greg,

Looks like the needinfo flag was set on me by mistake.
Is there anything I need to address?

Regards,
Amar

Comment 13 Greg Farnum 2023-09-05 13:37:20 UTC
(In reply to Amarnath from comment #11)
> Hi Greg,
> 
> Looks like the needinfo flag was set on me by mistake.
> Is there anything I need to address?
> 
> Regards,
> Amar

Sorry, looks like I hit "qa contact" instead of "assignee". :)

Comment 18 Amarnath 2023-09-11 10:07:23 UTC
Hi All,
After the help text update, the output looks clean.

Tested it:

[root@ceph-fs-6-bz-gebvp1-node7 ~]# cephfs-journal-tool -h
Usage: 
  cephfs-journal-tool [options] journal <command>
    <command>:
      inspect
      import <path> [--force]
      export <path>
      reset [--force]
  cephfs-journal-tool [options] header <get|set> <field> <value>
    <field>: [trimmed_pos|expire_pos|write_pos|pool_id]
  cephfs-journal-tool [options] event <effect> <selector> <output> [special options]
    <selector>:
      --range=<start>..<end>
      --path=<substring>
      --inode=<integer>
      --type=<UPDATE|OPEN|SESSION...><
      --frag=<ino>.<frag> [--dname=<dentry string>]
      --client=<session id integer>
    <effect>: [get|recover_dentries|splice]
    <output>: [summary|list|binary|json] [--path <path>]

General options:
  --rank=filesystem:{mds-rank|all} journal rank or "all" ranks (mandatory)
  --journal=<mdlog|purge_queue>  Journal type (purge_queue means
                                 this journal is used to queue for purge operation,
                                 default is mdlog, and only mdlog support event mode)

Special options
  --alternate-pool <name>     Alternative metadata pool to target
                              when using recover_dentries.
  --conf/-c FILE    read configuration from the given configuration file
  --id ID           set ID portion of my name
  --name/-n TYPE.ID set name
  --cluster NAME    set cluster name (default: ceph)
  --setuser USER    set uid to user or uid (and gid to user's gid)
  --setgroup GROUP  set gid to group or gid
  --version         show version and quit

[root@ceph-fs-6-bz-gebvp1-node7 ~]# ceph version
ceph version 17.2.6-136.el9cp (24003d91a44631e46f9397ab1b9c5b77dc9223bc) quincy (stable)
[root@ceph-fs-6-bz-gebvp1-node7 ~]# cephfs-journal-tool --rank=cephfs:0 journal export backup.bin
journal is 102562940~243368261
wrote 243368261 bytes at offset 102562940 to backup.bin
NOTE: this is a _sparse_ file; you can
	$ tar cSzf backup.bin.tgz backup.bin
      to efficiently compress it while preserving sparseness.
[root@ceph-fs-6-bz-gebvp1-node7 ~]# ls -lrt
total 237672
-rw-r--r--. 1 root root 345931201 Sep 11 06:04 backup.bin
[root@ceph-fs-6-bz-gebvp1-node7 ~]# cephfs-journal-tool --rank=cephfs:all journal export backup.bin
journal is 102562940~243368261
wrote 243368261 bytes at offset 102562940 to backup.bin.0
NOTE: this is a _sparse_ file; you can
	$ tar cSzf backup.bin.0.tgz backup.bin.0
      to efficiently compress it while preserving sparseness.
journal is 4194304~160274358
wrote 160274358 bytes at offset 4194304 to backup.bin.1
NOTE: this is a _sparse_ file; you can
	$ tar cSzf backup.bin.1.tgz backup.bin.1
      to efficiently compress it while preserving sparseness.
[root@ceph-fs-6-bz-gebvp1-node7 ~]# ls -lrt
total 631868
-rw-r--r--. 1 root root 345931201 Sep 11 06:04 backup.bin
-rw-r--r--. 1 root root 345931201 Sep 11 06:05 backup.bin.0
-rw-r--r--. 1 root root 164468662 Sep 11 06:05 backup.bin.1
[root@ceph-fs-6-bz-gebvp1-node7 ~]# ceph fs status
cephfs - 2 clients
======
RANK  STATE                     MDS                       ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  cephfs.ceph-fs-6-bz-gebvp1-node4.dipxkm  Reqs:    0 /s  46.2k  46.2k  7204   34.0k  
 1    active  cephfs.ceph-fs-6-bz-gebvp1-node6.kgmbmg  Reqs:    0 /s  14.2k  14.2k  3222   14.1k  
       POOL           TYPE     USED  AVAIL  
cephfs.cephfs.meta  metadata  1160M  54.0G  
cephfs.cephfs.data    data    3586M  54.0G  
              STANDBY MDS                
cephfs.ceph-fs-6-bz-gebvp1-node5.kbimfq  
MDS version: ceph version 17.2.6-136.el9cp (24003d91a44631e46f9397ab1b9c5b77dc9223bc) quincy (stable)
[root@ceph-fs-6-bz-gebvp1-node7 ~]# ceph versions
{
    "mon": {
        "ceph version 17.2.6-136.el9cp (24003d91a44631e46f9397ab1b9c5b77dc9223bc) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.6-136.el9cp (24003d91a44631e46f9397ab1b9c5b77dc9223bc) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.6-136.el9cp (24003d91a44631e46f9397ab1b9c5b77dc9223bc) quincy (stable)": 12
    },
    "mds": {
        "ceph version 17.2.6-136.el9cp (24003d91a44631e46f9397ab1b9c5b77dc9223bc) quincy (stable)": 3
    },
    "overall": {
        "ceph version 17.2.6-136.el9cp (24003d91a44631e46f9397ab1b9c5b77dc9223bc) quincy (stable)": 20
    }
}
[root@ceph-fs-6-bz-gebvp1-node7 ~]# 

Regards,
Amarnath

Comment 22 errata-xmlrpc 2023-10-12 16:34:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 6.1 security, enhancement, and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:5693

