Description of problem:

The ceph fs volume create command is failing:

[ceph: root@ceph-amk-metrics-wgmve5-node1-installer /]# ceph fs volume create cephfs
no valid command found; 10 closest matches:
fs snapshot mirror enable [<fs_name>]
fs snapshot mirror disable [<fs_name>]
fs snapshot mirror peer_add <fs_name> [<remote_cluster_spec>] [<remote_fs_name>] [<remote_mon_host>] [<cephx_key>]
fs snapshot mirror peer_list [<fs_name>]
fs snapshot mirror peer_remove <fs_name> [<peer_uuid>]
fs snapshot mirror peer_bootstrap create <fs_name> <client_name> [<site_name>]
fs snapshot mirror peer_bootstrap import <fs_name> [<token>]
fs snapshot mirror add <fs_name> [<path>]
fs snapshot mirror remove <fs_name> [<path>]
fs snapshot mirror ls [<fs_name>]
Error EINVAL: invalid command

Ceph health reporting:

[ceph: root@ceph-amk-metrics-wgmve5-node1-installer /]# ceph health detail
HEALTH_WARN Module 'volumes' has failed dependency: invalid syntax (module.py, line 150)
[WRN] MGR_MODULE_DEPENDENCY: Module 'volumes' has failed dependency: invalid syntax (module.py, line 150)
    Module 'volumes' has failed dependency: invalid syntax (module.py, line 150)
[ceph: root@ceph-amk-metrics-wgmve5-node1-installer /]#

Version-Release number of selected component (if applicable):

[ceph: root@ceph-amk-metrics-wgmve5-node1-installer /]# ceph versions
{
    "mon": {
        "ceph version 19.2.1-171.el9cp (0569e8bad6f749e8b090c3d2304c5962235927c6) squid (stable)": 3
    },
    "mgr": {
        "ceph version 19.2.1-171.el9cp (0569e8bad6f749e8b090c3d2304c5962235927c6) squid (stable)": 2
    },
    "osd": {
        "ceph version 19.2.1-171.el9cp (0569e8bad6f749e8b090c3d2304c5962235927c6) squid (stable)": 16
    },
    "overall": {
        "ceph version 19.2.1-171.el9cp (0569e8bad6f749e8b090c3d2304c5962235927c6) squid (stable)": 21
    }
}
[ceph: root@ceph-amk-metrics-wgmve5-node1-installer /]#

How reproducible:
1/1

Steps to Reproduce:
1. Deploy a cluster running ceph version 19.2.1-171.el9cp (squid).
2. Run: ceph fs volume create cephfs
3. Run: ceph health detail

Actual results:
ceph fs volume create fails with "Error EINVAL: invalid command", and ceph health detail reports HEALTH_WARN with "Module 'volumes' has failed dependency: invalid syntax (module.py, line 150)".

Expected results:
ceph fs volume create succeeds and creates the CephFS volume, with no MGR_MODULE_DEPENDENCY health warning.

Additional info:
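For monitoring purposes, the failed-module state above can also be detected programmatically from the cluster's machine-readable health output. A minimal sketch, assuming the general "checks" shape of ceph health detail --format json; the helper name and the sample payload are illustrative, not captured from the failing cluster:

```python
import json

def failed_mgr_module_messages(health_json: str):
    """Return the detail messages of the MGR_MODULE_DEPENDENCY health
    check, or an empty list if the check is not raised.
    (Helper name is hypothetical.)"""
    health = json.loads(health_json)
    check = health.get("checks", {}).get("MGR_MODULE_DEPENDENCY")
    if not check:
        return []
    return [d["message"] for d in check.get("detail", [])]

# Sample payload shaped like the HEALTH_WARN output above
# (an assumption about the JSON layout, for illustration only).
sample = json.dumps({
    "status": "HEALTH_WARN",
    "checks": {
        "MGR_MODULE_DEPENDENCY": {
            "severity": "HEALTH_WARN",
            "summary": {
                "message": "Module 'volumes' has failed dependency: "
                           "invalid syntax (module.py, line 150)"
            },
            "detail": [{
                "message": "Module 'volumes' has failed dependency: "
                           "invalid syntax (module.py, line 150)"
            }],
        }
    },
})

print(failed_mgr_module_messages(sample))
```

A healthy cluster (no MGR_MODULE_DEPENDENCY entry under "checks") yields an empty list, so the helper can gate an alert directly.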
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2025:9775