Description of problem:

Upgrade of the nvmeof daemons failed when moving from Ceph version 18.2.0-65.el9cp to 18.2.0-71.el9cp, with the status below:

>>>ceph health detail
>>>[WRN] UPGRADE_REDEPLOY_DAEMON: Upgrading daemon nvmeof.rbd.ceph-2sunilkumar-nhs96b-node6.zpkhmf on host ceph-2sunilkumar-nhs96b-node6 failed.
>>>    Upgrade daemon: nvmeof.rbd.ceph-2sunilkumar-nhs96b-node6.zpkhmf: Cannot redeploy nvmeof.rbd.ceph-2sunilkumar-nhs96b-node6.zpkhmf with a new image: Supported types are: mgr, mon, crash, osd, mds, rgw, rbd-mirror, cephfs-mirror, ceph-exporter, iscsi, nfs
>>>2023-10-03T14:44:20.503+0000 7f9561f20640 0 log_channel(cluster) log [WRN] : [WRN] UPGRADE_REDEPLOY_DAEMON: Upgrading daemon nvmeof.rbd.ceph-2sunilkumar-nhs96b-node6.zpkhmf on host ceph-2sunilkumar-nhs96b-node6 failed.
>>>2023-10-03T14:44:20.503+0000 7f9561f20640 0 log_channel(cluster) log [WRN] : Upgrade daemon: nvmeof.rbd.ceph-2sunilkumar-nhs96b-node6.zpkhmf: Cannot redeploy nvmeof.rbd.ceph-2sunilkumar-nhs96b-node6.zpkhmf with a new image: Supported types are: mgr, mon, crash, osd, mds, rgw, rbd-mirror, cephfs-mirror, ceph-exporter, iscsi, nfs
>>>
>>>[ceph: root@ceph-2sunilkumar-nhs96b-node1-installer /]# ceph orch upgrade status
>>>{
>>>    "target_image": "registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:0fbf7dda6349c01d62584f51d71fe6a3214f86f4d9a4bf478a9c8d209509069e",
>>>    "in_progress": true,
>>>    "which": "Upgrading all daemon types on all hosts",
>>>    "services_complete": [
>>>        "mgr",
>>>        "ceph-exporter",
>>>        "mon",
>>>        "crash",
>>>        "osd"
>>>    ],
>>>    "progress": "29/39 daemons upgraded",
>>>    "message": "Error: UPGRADE_REDEPLOY_DAEMON: Upgrading daemon nvmeof.rbd.ceph-2sunilkumar-nhs96b-node6.zpkhmf on host ceph-2sunilkumar-nhs96b-node6 failed.",
>>>    "is_paused": true
>>>}
>>>
>>>Cluster details
>>>-----------------
>>># cat /etc/hosts
>>>127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
>>>::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
>>>10.0.211.104 ceph-2sunilkumar-nhs96b-node1-installer ceph-2sunilkumar-nhs96b-node1-installer    <---- bootstrap node
>>>10.0.208.145 ceph-2sunilkumar-nhs96b-node2 ceph-2sunilkumar-nhs96b-node2
>>>10.0.209.236 ceph-2sunilkumar-nhs96b-node3 ceph-2sunilkumar-nhs96b-node3
>>>10.0.211.109 ceph-2sunilkumar-nhs96b-node4 ceph-2sunilkumar-nhs96b-node4
>>>10.0.210.76 ceph-2sunilkumar-nhs96b-node5 ceph-2sunilkumar-nhs96b-node5
>>>10.0.209.201 ceph-2sunilkumar-nhs96b-node6 ceph-2sunilkumar-nhs96b-node6    <------ GW node
>>>10.0.208.183 ceph-2sunilkumar-nhs96b-node7 ceph-2sunilkumar-nhs96b-node7
>>>
>>>Cluster nodes can be used for debugging (creds: cephuser/cephuser).

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
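For reference, when the upgrade hits this error it pauses itself and the cluster stays in HEALTH_WARN until the nvmeof daemon issue is dealt with. A minimal sketch of the commands for inspecting the stalled upgrade and retrying it (standard cephadm orchestrator commands; <target-image> is a placeholder for the image being upgraded to):

    # See why the upgrade stopped and which daemons are still pending
    ceph health detail
    ceph orch upgrade status

    # Check the state of the gateway daemon that failed to redeploy
    ceph orch ps --daemon-type nvmeof

    # After the blocker is addressed, resume the paused upgrade...
    ceph orch upgrade resume

    # ...or stop it and start again against the target image
    ceph orch upgrade stop
    ceph orch upgrade start --image <target-image>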
Marking it as failed for now, since we are unable to proceed with the upgrade due to this bug: https://bugzilla.redhat.com/show_bug.cgi?id=2246177
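The exact workaround is documented in BZ-2246177 and is not repeated here. As an illustration only: while the retried upgrade runs (see the next comment), progress can be followed through the cephadm event stream in addition to polling the orchestrator:

    # Stream cephadm events live while the upgrade proceeds
    ceph -W cephadm

    # Poll the orchestrator's view of the upgrade and the daemons
    ceph orch upgrade status
    ceph orch ps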
After applying workaround mentioned in the BZ-2246177, Upgrade started and completed successfully. [ceph: root@cali001 /]# ceph orch upgrade start --image registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-7.0-rhel-9-containers-candidate-27036-20231025004606 >>>Initiating upgrade to registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-7.0-rhel-9-containers-candidate-27036-20231025004606 >>>[ceph: root@cali001 /]# >>>[ceph: root@cali001 /]# ceph orch upgrade status >>>{ >>> "target_image": "registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:5e7eaa2fde8e04f256838c73be925b43b31866640795bcbe69a319d345ab5433", >>> "in_progress": true, >>> "which": "Upgrading all daemon types on all hosts", >>> "services_complete": [], >>> "progress": "1/59 daemons upgraded", >>> "message": "", >>> "is_paused": false >>>} >>> >>>[ceph: root@cali001 /]# ceph orch upgrade status >>>{ >>> "target_image": "registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:5e7eaa2fde8e04f256838c73be925b43b31866640795bcbe69a319d345ab5433", >>> "in_progress": true, >>> "which": "Upgrading all daemon types on all hosts", >>> "services_complete": [], >>> "progress": "1/59 daemons upgraded", >>> "message": "", >>> "is_paused": false >>>} >>> >>> >>>[ceph: root@cali001 /]# ceph orch ls >>>NAME PORTS RUNNING REFRESHED AGE PLACEMENT >>>alertmanager ?:9093,9094 1/1 21s ago 2w count:1 >>>ceph-exporter 5/5 37s ago 2w * >>>crash 5/5 37s ago 2w * >>>grafana ?:3000 1/1 13s ago 2w count:1 >>>mgr 2/2 37s ago 2w label:mgr >>>mon 3/3 37s ago 2w label:mon >>>node-exporter ?:9100 5/5 37s ago 2w * >>>nvmeof.rbd ?:4420,5500,8009 1/1 13s ago 13h cali010 >>>osd.all-available-devices 35 37s ago 2w * >>>prometheus ?:9095 1/1 21s ago 2w count:1 >>>[ceph: root@cali001 /]# ceph orch ps >>>NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID >>>alertmanager.cali001 cali001 *:9093,9094 running (28s) 26s ago 2w 30.3M - 0.24.0 a1e0405d9439 a9e1bbe4e0cd >>>ceph-exporter.cali001 cali001 running (77s) 26s ago 2w 7986k - 18.2.0-99.el9cp 138c5b437132 305c9a6e22c3 >>>ceph-exporter.cali004 cali004 running (74s) 42s ago 2w 7952k - 18.2.0-99.el9cp 138c5b437132 9a87f9ddee21 >>>ceph-exporter.cali005 cali005 running (72s) 42s ago 2w 7931k - 18.2.0-99.el9cp 138c5b437132 0083c59b81e0 >>>ceph-exporter.cali008 cali008 running (70s) 33s ago 2w 7810k - 18.2.0-99.el9cp 138c5b437132 c619b95582b5 >>>ceph-exporter.cali010 cali010 running (68s) 18s ago 2w 7810k - 18.2.0-99.el9cp 138c5b437132 6565f9d9da71 >>>crash.cali001 cali001 running (7m) 26s ago 2w 6891k - 18.2.0-99.el9cp 138c5b437132 fd84816113cd >>>crash.cali004 cali004 running (7m) 42s ago 2w 6891k - 18.2.0-99.el9cp 138c5b437132 c7f4539722cf >>>crash.cali005 cali005 running (7m) 42s ago 2w 6891k - 18.2.0-99.el9cp 138c5b437132 3cfca885b8d4 >>>crash.cali008 cali008 running (7m) 33s ago 2w 6887k - 18.2.0-99.el9cp 138c5b437132 cad7cc25c6cc >>>crash.cali010 cali010 running (6m) 18s ago 2w 6899k - 18.2.0-99.el9cp 138c5b437132 8a91beb80b8d >>>grafana.cali010 cali010 *:3000 running (20s) 18s ago 13h 88.4M - 9.4.7 4a8e72d52594 e5d75f443260 >>>mgr.cali001.joixou cali001 *:8443,9283,8765 running (8m) 26s ago 2w 430M - 18.2.0-99.el9cp 138c5b437132 66a0859b536f >>>mgr.cali004.jcadqz cali004 *:8443,9283,8765 running (8m) 42s ago 2w 512M - 18.2.0-99.el9cp 138c5b437132 a598f2c8dfeb >>>mon.cali001 cali001 running (8m) 26s ago 2w 85.1M 2048M 18.2.0-99.el9cp 138c5b437132 d3afbd3ac1c9 >>>mon.cali004 cali004 running (8m) 42s ago 2w 71.6M 2048M 18.2.0-99.el9cp 138c5b437132 ea796f7d7072 
>>>mon.cali005 cali005 running (7m) 42s ago 2w 69.7M 2048M 18.2.0-99.el9cp 138c5b437132 de8635a52c2b >>>node-exporter.cali001 cali001 *:9100 running (52s) 26s ago 2w 68.2M - 1.4.0 925b10dd3bb0 e64d0f57578f >>>node-exporter.cali004 cali004 *:9100 running (50s) 42s ago 2w 67.5M - 1.4.0 925b10dd3bb0 fd4e4eca500d >>>node-exporter.cali005 cali005 *:9100 running (47s) 42s ago 2w 5578k - 1.4.0 925b10dd3bb0 79b7c4ec4bca >>>node-exporter.cali008 cali008 *:9100 running (45s) 33s ago 2w 64.5M - 1.4.0 925b10dd3bb0 c0be512a9f87 >>>node-exporter.cali010 cali010 *:9100 running (43s) 18s ago 2w 63.7M - 1.4.0 925b10dd3bb0 0a275e111a09 >>>nvmeof.rbd.cali010.egzbsx cali010 *:5500,4420,8009 running (59s) 18s ago 13h 56.5M - 69c2cf6e1104 98a98e7d29bc >>>osd.0 cali008 running (2m) 33s ago 2w 315M 12.1G 18.2.0-99.el9cp 138c5b437132 73d5b1d80767 >>>osd.1 cali005 running (5m) 42s ago 2w 54.8M 12.0G 18.2.0-99.el9cp 138c5b437132 3976f952df3e >>>osd.2 cali004 running (5m) 42s ago 2w 445M 11.4G 18.2.0-99.el9cp 138c5b437132 94d18efe80c8 >>>osd.3 cali010 running (100s) 18s ago 2w 543M 11.8G 18.2.0-99.el9cp 138c5b437132 fde9c5d5be98 >>>osd.4 cali001 running (6m) 26s ago 2w 714M 11.1G 18.2.0-99.el9cp 138c5b437132 855c3f4207ed >>>osd.5 cali005 running (3m) 42s ago 2w 879M 12.0G 18.2.0-99.el9cp 138c5b437132 bf9955167c74 >>>osd.6 cali008 running (5m) 33s ago 2w 234M 12.1G 18.2.0-99.el9cp 138c5b437132 8d0fce5b1647 >>>osd.7 cali004 running (4m) 42s ago 2w 847M 11.4G 18.2.0-99.el9cp 138c5b437132 ac1691cf8130 >>>osd.8 cali010 running (93s) 18s ago 2w 311M 11.8G 18.2.0-99.el9cp 138c5b437132 1d902ecfdede >>>osd.9 cali001 running (6m) 26s ago 2w 1328M 11.1G 18.2.0-99.el9cp 138c5b437132 50086e7b6be9 >>>osd.10 cali008 running (2m) 33s ago 2w 538M 12.1G 18.2.0-99.el9cp 138c5b437132 2d5c88fcde50 >>>osd.11 cali004 running (4m) 42s ago 2w 550M 11.4G 18.2.0-99.el9cp 138c5b437132 10f49d80e6db >>>osd.12 cali005 running (3m) 42s ago 2w 987M 12.0G 18.2.0-99.el9cp 138c5b437132 dfbd74415829 >>>osd.13 cali010 running (3m) 18s ago 2w 486M 11.8G 18.2.0-99.el9cp 138c5b437132 fa550cdbc471 >>>osd.14 cali001 running (6m) 26s ago 2w 498M 11.1G 18.2.0-99.el9cp 138c5b437132 d24df0bcbb4a >>>osd.15 cali004 running (5m) 42s ago 2w 1175M 11.4G 18.2.0-99.el9cp 138c5b437132 4f5c32426133 >>>osd.16 cali005 running (4m) 42s ago 2w 550M 12.0G 18.2.0-99.el9cp 138c5b437132 77d87ee03f21 >>>osd.17 cali001 running (6m) 26s ago 2w 448M 11.1G 18.2.0-99.el9cp 138c5b437132 dfdb088c96af >>>osd.18 cali008 running (2m) 33s ago 2w 446M 12.1G 18.2.0-99.el9cp 138c5b437132 c41d32c2af39 >>>osd.19 cali010 running (85s) 18s ago 2w 382M 11.8G 18.2.0-99.el9cp 138c5b437132 b1fb2e71e46e >>>osd.20 cali004 running (5m) 42s ago 2w 653M 11.4G 18.2.0-99.el9cp 138c5b437132 0965be5deb03 >>>osd.21 cali005 running (5m) 42s ago 2w 387M 12.0G 18.2.0-99.el9cp 138c5b437132 82f711a1cbde >>>osd.22 cali010 running (4m) 18s ago 2w 362M 11.8G 18.2.0-99.el9cp 138c5b437132 43c5c717c3c1 >>>osd.23 cali001 running (6m) 26s ago 2w 1055M 11.1G 18.2.0-99.el9cp 138c5b437132 cf0d803552bc >>>osd.24 cali008 running (5m) 33s ago 2w 267M 12.1G 18.2.0-99.el9cp 138c5b437132 073ac4086914 >>>osd.25 cali004 running (5m) 42s ago 2w 86.7M 11.4G 18.2.0-99.el9cp 138c5b437132 9a6d14f0015b >>>osd.26 cali008 running (2m) 33s ago 2w 891M 12.1G 18.2.0-99.el9cp 138c5b437132 eac24fd6b1d6 >>>osd.27 cali005 running (3m) 42s ago 2w 897M 12.0G 18.2.0-99.el9cp 138c5b437132 72fab05d627d >>>osd.28 cali010 running (3m) 18s ago 2w 268M 11.8G 18.2.0-99.el9cp 138c5b437132 efa597464fee >>>osd.29 cali001 running (6m) 26s ago 2w 745M 11.1G 
18.2.0-99.el9cp 138c5b437132 606ff7fd3f8f >>>osd.30 cali004 running (4m) 42s ago 2w 1034M 11.4G 18.2.0-99.el9cp 138c5b437132 6888b489e2bf >>>osd.31 cali008 running (2m) 33s ago 2w 655M 12.1G 18.2.0-99.el9cp 138c5b437132 d2493958927d >>>osd.32 cali005 running (3m) 42s ago 2w 629M 12.0G 18.2.0-99.el9cp 138c5b437132 215e8fd46082 >>>osd.33 cali010 running (4m) 18s ago 2w 801M 11.8G 18.2.0-99.el9cp 138c5b437132 42c420e29a68 >>>osd.34 cali001 running (6m) 26s ago 2w 386M 11.1G 18.2.0-99.el9cp 138c5b437132 e8af5dd985f7 >>>prometheus.cali001 cali001 *:9095 running (35s) 26s ago 2w 145M - 2.39.1 657ac6fe7b15 b9a7aa49e881 >>>[ceph: root@cali001 /]# >>>[ceph: root@cali001 /]# ceph orch ps >>>NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID >>>alertmanager.cali001 cali001 *:9093,9094 running (63s) 61s ago 2w 30.3M - 0.24.0 a1e0405d9439 a9e1bbe4e0cd >>>ceph-exporter.cali001 cali001 running (112s) 61s ago 2w 7986k - 18.2.0-99.el9cp 138c5b437132 305c9a6e22c3 >>>ceph-exporter.cali004 cali004 running (109s) 77s ago 2w 7952k - 18.2.0-99.el9cp 138c5b437132 9a87f9ddee21 >>>ceph-exporter.cali005 cali005 running (107s) 77s ago 2w 7931k - 18.2.0-99.el9cp 138c5b437132 0083c59b81e0 >>>ceph-exporter.cali008 cali008 running (105s) 68s ago 2w 7810k - 18.2.0-99.el9cp 138c5b437132 c619b95582b5 >>>ceph-exporter.cali010 cali010 running (103s) 53s ago 2w 7810k - 18.2.0-99.el9cp 138c5b437132 6565f9d9da71 >>>crash.cali001 cali001 running (8m) 61s ago 2w 6891k - 18.2.0-99.el9cp 138c5b437132 fd84816113cd >>>crash.cali004 cali004 running (8m) 77s ago 2w 6891k - 18.2.0-99.el9cp 138c5b437132 c7f4539722cf >>>crash.cali005 cali005 running (8m) 77s ago 2w 6891k - 18.2.0-99.el9cp 138c5b437132 3cfca885b8d4 >>>crash.cali008 cali008 running (7m) 68s ago 2w 6887k - 18.2.0-99.el9cp 138c5b437132 cad7cc25c6cc >>>crash.cali010 cali010 running (7m) 53s ago 2w 6899k - 18.2.0-99.el9cp 138c5b437132 8a91beb80b8d >>>grafana.cali010 cali010 *:3000 running (55s) 53s ago 13h 88.4M - 9.4.7 4a8e72d52594 e5d75f443260 >>>mgr.cali001.joixou cali001 *:8443,9283,8765 running (9m) 61s ago 2w 430M - 18.2.0-99.el9cp 138c5b437132 66a0859b536f >>>mgr.cali004.jcadqz cali004 *:8443,9283,8765 running (9m) 77s ago 2w 512M - 18.2.0-99.el9cp 138c5b437132 a598f2c8dfeb >>>mon.cali001 cali001 running (8m) 61s ago 2w 85.1M 2048M 18.2.0-99.el9cp 138c5b437132 d3afbd3ac1c9 >>>mon.cali004 cali004 running (8m) 77s ago 2w 71.6M 2048M 18.2.0-99.el9cp 138c5b437132 ea796f7d7072 >>>mon.cali005 cali005 running (8m) 77s ago 2w 69.7M 2048M 18.2.0-99.el9cp 138c5b437132 de8635a52c2b >>>node-exporter.cali001 cali001 *:9100 running (87s) 61s ago 2w 68.2M - 1.4.0 925b10dd3bb0 e64d0f57578f >>>node-exporter.cali004 cali004 *:9100 running (85s) 77s ago 2w 67.5M - 1.4.0 925b10dd3bb0 fd4e4eca500d >>>node-exporter.cali005 cali005 *:9100 running (82s) 77s ago 2w 5578k - 1.4.0 925b10dd3bb0 79b7c4ec4bca >>>node-exporter.cali008 cali008 *:9100 running (80s) 68s ago 2w 64.5M - 1.4.0 925b10dd3bb0 c0be512a9f87 >>>node-exporter.cali010 cali010 *:9100 running (78s) 53s ago 2w 63.7M - 1.4.0 925b10dd3bb0 0a275e111a09 >>>nvmeof.rbd.cali010.egzbsx cali010 *:5500,4420,8009 running (94s) 53s ago 13h 56.5M - 69c2cf6e1104 98a98e7d29bc >>>osd.0 cali008 running (3m) 68s ago 2w 315M 12.1G 18.2.0-99.el9cp 138c5b437132 73d5b1d80767 >>>osd.1 cali005 running (6m) 77s ago 2w 54.8M 12.0G 18.2.0-99.el9cp 138c5b437132 3976f952df3e >>>osd.2 cali004 running (5m) 77s ago 2w 445M 11.4G 18.2.0-99.el9cp 138c5b437132 94d18efe80c8 >>>osd.3 cali010 running (2m) 53s ago 2w 543M 11.8G 
18.2.0-99.el9cp 138c5b437132 fde9c5d5be98 >>>osd.4 cali001 running (7m) 61s ago 2w 714M 11.1G 18.2.0-99.el9cp 138c5b437132 855c3f4207ed >>>osd.5 cali005 running (4m) 77s ago 2w 879M 12.0G 18.2.0-99.el9cp 138c5b437132 bf9955167c74 >>>osd.6 cali008 running (6m) 68s ago 2w 234M 12.1G 18.2.0-99.el9cp 138c5b437132 8d0fce5b1647 >>>osd.7 cali004 running (5m) 77s ago 2w 847M 11.4G 18.2.0-99.el9cp 138c5b437132 ac1691cf8130 >>>osd.8 cali010 running (2m) 53s ago 2w 311M 11.8G 18.2.0-99.el9cp 138c5b437132 1d902ecfdede >>>osd.9 cali001 running (6m) 61s ago 2w 1328M 11.1G 18.2.0-99.el9cp 138c5b437132 50086e7b6be9 >>>osd.10 cali008 running (2m) 68s ago 2w 538M 12.1G 18.2.0-99.el9cp 138c5b437132 2d5c88fcde50 >>>osd.11 cali004 running (5m) 77s ago 2w 550M 11.4G 18.2.0-99.el9cp 138c5b437132 10f49d80e6db >>>osd.12 cali005 running (4m) 77s ago 2w 987M 12.0G 18.2.0-99.el9cp 138c5b437132 dfbd74415829 >>>osd.13 cali010 running (3m) 53s ago 2w 486M 11.8G 18.2.0-99.el9cp 138c5b437132 fa550cdbc471 >>>osd.14 cali001 running (7m) 61s ago 2w 498M 11.1G 18.2.0-99.el9cp 138c5b437132 d24df0bcbb4a >>>osd.15 cali004 running (5m) 77s ago 2w 1175M 11.4G 18.2.0-99.el9cp 138c5b437132 4f5c32426133 >>>osd.16 cali005 running (5m) 77s ago 2w 550M 12.0G 18.2.0-99.el9cp 138c5b437132 77d87ee03f21 >>>osd.17 cali001 running (6m) 61s ago 2w 448M 11.1G 18.2.0-99.el9cp 138c5b437132 dfdb088c96af >>>osd.18 cali008 running (3m) 68s ago 2w 446M 12.1G 18.2.0-99.el9cp 138c5b437132 c41d32c2af39 >>>osd.19 cali010 running (2m) 53s ago 2w 382M 11.8G 18.2.0-99.el9cp 138c5b437132 b1fb2e71e46e >>>osd.20 cali004 running (5m) 77s ago 2w 653M 11.4G 18.2.0-99.el9cp 138c5b437132 0965be5deb03 >>>osd.21 cali005 running (6m) 77s ago 2w 387M 12.0G 18.2.0-99.el9cp 138c5b437132 82f711a1cbde >>>osd.22 cali010 running (4m) 53s ago 2w 362M 11.8G 18.2.0-99.el9cp 138c5b437132 43c5c717c3c1 >>>osd.23 cali001 running (6m) 61s ago 2w 1055M 11.1G 18.2.0-99.el9cp 138c5b437132 cf0d803552bc >>>osd.24 cali008 running (6m) 68s ago 2w 267M 12.1G 18.2.0-99.el9cp 138c5b437132 073ac4086914 >>>osd.25 cali004 running (6m) 77s ago 2w 86.7M 11.4G 18.2.0-99.el9cp 138c5b437132 9a6d14f0015b >>>osd.26 cali008 running (3m) 68s ago 2w 891M 12.1G 18.2.0-99.el9cp 138c5b437132 eac24fd6b1d6 >>>osd.27 cali005 running (4m) 77s ago 2w 897M 12.0G 18.2.0-99.el9cp 138c5b437132 72fab05d627d >>>osd.28 cali010 running (3m) 53s ago 2w 268M 11.8G 18.2.0-99.el9cp 138c5b437132 efa597464fee >>>osd.29 cali001 running (7m) 61s ago 2w 745M 11.1G 18.2.0-99.el9cp 138c5b437132 606ff7fd3f8f >>>osd.30 cali004 running (5m) 77s ago 2w 1034M 11.4G 18.2.0-99.el9cp 138c5b437132 6888b489e2bf >>>osd.31 cali008 running (3m) 68s ago 2w 655M 12.1G 18.2.0-99.el9cp 138c5b437132 d2493958927d >>>osd.32 cali005 running (3m) 77s ago 2w 629M 12.0G 18.2.0-99.el9cp 138c5b437132 215e8fd46082 >>>osd.33 cali010 running (4m) 53s ago 2w 801M 11.8G 18.2.0-99.el9cp 138c5b437132 42c420e29a68 >>>osd.34 cali001 running (6m) 61s ago 2w 386M 11.1G 18.2.0-99.el9cp 138c5b437132 e8af5dd985f7 >>>prometheus.cali001 cali001 *:9095 running (70s) 61s ago 2w 145M - 2.39.1 657ac6fe7b15 b9a7aa49e881 >>>[ceph: root@cali001 /]# >>>[ceph: root@cali001 /]# ceph orch ps | grep nvme >>>nvmeof.rbd.cali010.egzbsx cali010 *:5500,4420,8009 running (106s) 64s ago 13h 56.5M - 69c2cf6e1104 98a98e7d29bc >>>[ceph: root@cali001 /]# rados -p rbd listomapvals nvmeof.None.state >>>bdev_bdev1 >>>value (154 bytes) : >>>00000000 7b 0a 20 20 22 62 64 65 76 5f 6e 61 6d 65 22 3a |{. "bdev_name":| >>>00000010 20 22 62 64 65 76 31 22 2c 0a 20 20 22 72 62 64 | "bdev1",. 
"rbd| >>>00000020 5f 70 6f 6f 6c 5f 6e 61 6d 65 22 3a 20 22 72 62 |_pool_name": "rb| >>>00000030 64 22 2c 0a 20 20 22 72 62 64 5f 69 6d 61 67 65 |d",. "rbd_image| >>>00000040 5f 6e 61 6d 65 22 3a 20 22 69 6d 61 67 65 31 22 |_name": "image1"| >>>00000050 2c 0a 20 20 22 62 6c 6f 63 6b 5f 73 69 7a 65 22 |,. "block_size"| >>>00000060 3a 20 34 30 39 36 2c 0a 20 20 22 75 75 69 64 22 |: 4096,. "uuid"| >>>00000070 3a 20 22 39 33 34 32 66 37 65 34 2d 65 37 62 66 |: "9342f7e4-e7bf| >>>00000080 2d 34 32 30 36 2d 39 35 64 65 2d 33 30 39 32 64 |-4206-95de-3092d| >>>00000090 62 39 61 36 32 37 62 22 0a 7d |b9a627b".}| >>>0000009abdev_bdev2 >>>value (154 bytes) : >>>00000000 7b 0a 20 20 22 62 64 65 76 5f 6e 61 6d 65 22 3a |{. "bdev_name":| >>>00000010 20 22 62 64 65 76 32 22 2c 0a 20 20 22 72 62 64 | "bdev2",. "rbd| >>>00000020 5f 70 6f 6f 6c 5f 6e 61 6d 65 22 3a 20 22 72 62 |_pool_name": "rb| >>>00000030 64 22 2c 0a 20 20 22 72 62 64 5f 69 6d 61 67 65 |d",. "rbd_image| >>>00000040 5f 6e 61 6d 65 22 3a 20 22 69 6d 61 67 65 32 22 |_name": "image2"| >>>00000050 2c 0a 20 20 22 62 6c 6f 63 6b 5f 73 69 7a 65 22 |,. "block_size"| >>>00000060 3a 20 34 30 39 36 2c 0a 20 20 22 75 75 69 64 22 |: 4096,. "uuid"| >>>00000070 3a 20 22 32 36 61 64 35 35 62 33 2d 36 35 36 31 |: "26ad55b3-6561| >>>00000080 2d 34 32 61 32 2d 38 62 66 30 2d 34 31 62 66 64 |-42a2-8bf0-41bfd| >>>00000090 63 63 66 62 34 37 62 22 0a 7d |ccfb47b".}| >>>0000009ahost_nqn.2016-06.io.spdk:cnode2_* >>>value (70 bytes) : >>>00000000 7b 0a 20 20 22 73 75 62 73 79 73 74 65 6d 5f 6e |{. "subsystem_n| >>>00000010 71 6e 22 3a 20 22 6e 71 6e 2e 32 30 31 36 2d 30 |qn": "nqn.2016-0| >>>00000020 36 2e 69 6f 2e 73 70 64 6b 3a 63 6e 6f 64 65 32 |6.io.spdk:cnode2| >>>00000030 22 2c 0a 20 20 22 68 6f 73 74 5f 6e 71 6e 22 3a |",. "host_nqn":| >>>00000040 20 22 2a 22 0a 7d | "*".}| >>>00000046listener_nqn.2016-06.io.spdk:cnode2_client.nvmeof.rbd.cali010.egzbsx_TCP_10.8.130.10_5002 >>>value (182 bytes) : >>>00000000 7b 0a 20 20 22 6e 71 6e 22 3a 20 22 6e 71 6e 2e |{. "nqn": "nqn.| >>>00000010 32 30 31 36 2d 30 36 2e 69 6f 2e 73 70 64 6b 3a |2016-06.io.spdk:| >>>00000020 63 6e 6f 64 65 32 22 2c 0a 20 20 22 67 61 74 65 |cnode2",. "gate| >>>00000030 77 61 79 5f 6e 61 6d 65 22 3a 20 22 63 6c 69 65 |way_name": "clie| >>>00000040 6e 74 2e 6e 76 6d 65 6f 66 2e 72 62 64 2e 63 61 |nt.nvmeof.rbd.ca| >>>00000050 6c 69 30 31 30 2e 65 67 7a 62 73 78 22 2c 0a 20 |li010.egzbsx",. | >>>00000060 20 22 74 72 74 79 70 65 22 3a 20 22 54 43 50 22 | "trtype": "TCP"| >>>00000070 2c 0a 20 20 22 61 64 72 66 61 6d 22 3a 20 22 69 |,. "adrfam": "i| >>>00000080 70 76 34 22 2c 0a 20 20 22 74 72 61 64 64 72 22 |pv4",. "traddr"| >>>00000090 3a 20 22 31 30 2e 38 2e 31 33 30 2e 31 30 22 2c |: "10.8.130.10",| >>>000000a0 0a 20 20 22 74 72 73 76 63 69 64 22 3a 20 22 35 |. "trsvcid": "5| >>>000000b0 30 30 32 22 0a 7d |002".}| >>>000000b6namespace_nqn.2016-06.io.spdk:cnode2_1 >>>value (88 bytes) : >>>00000000 7b 0a 20 20 22 73 75 62 73 79 73 74 65 6d 5f 6e |{. "subsystem_n| >>>00000010 71 6e 22 3a 20 22 6e 71 6e 2e 32 30 31 36 2d 30 |qn": "nqn.2016-0| >>>00000020 36 2e 69 6f 2e 73 70 64 6b 3a 63 6e 6f 64 65 32 |6.io.spdk:cnode2| >>>00000030 22 2c 0a 20 20 22 62 64 65 76 5f 6e 61 6d 65 22 |",. "bdev_name"| >>>00000040 3a 20 22 62 64 65 76 31 22 2c 0a 20 20 22 6e 73 |: "bdev1",. "ns| >>>00000050 69 64 22 3a 20 31 0a 7d |id": 1.}| >>>00000058namespace_nqn.2016-06.io.spdk:cnode2_2 >>>value (88 bytes) : >>>00000000 7b 0a 20 20 22 73 75 62 73 79 73 74 65 6d 5f 6e |{. 
"subsystem_n| >>>00000010 71 6e 22 3a 20 22 6e 71 6e 2e 32 30 31 36 2d 30 |qn": "nqn.2016-0| >>>00000020 36 2e 69 6f 2e 73 70 64 6b 3a 63 6e 6f 64 65 32 |6.io.spdk:cnode2| >>>00000030 22 2c 0a 20 20 22 62 64 65 76 5f 6e 61 6d 65 22 |",. "bdev_name"| >>>00000040 3a 20 22 62 64 65 76 32 22 2c 0a 20 20 22 6e 73 |: "bdev2",. "ns| >>>00000050 69 64 22 3a 20 32 0a 7d |id": 2.}| >>>00000058omap_version >>>value (1 bytes) : >>>00000000 38 |8| >>>00000001subsystem_nqn.2016-06.io.spdk:cnode2 >>>value (99 bytes) : >>>00000000 7b 0a 20 20 22 73 75 62 73 79 73 74 65 6d 5f 6e |{. "subsystem_n| >>>00000010 71 6e 22 3a 20 22 6e 71 6e 2e 32 30 31 36 2d 30 |qn": "nqn.2016-0| >>>00000020 36 2e 69 6f 2e 73 70 64 6b 3a 63 6e 6f 64 65 32 |6.io.spdk:cnode2| >>>00000030 22 2c 0a 20 20 22 73 65 72 69 61 6c 5f 6e 75 6d |",. "serial_num| >>>00000040 62 65 72 22 3a 20 22 32 22 2c 0a 20 20 22 6d 61 |ber": "2",. "ma| >>>00000050 78 5f 6e 61 6d 65 73 70 61 63 65 73 22 3a 20 33 |x_namespaces": 3| >>>00000060 32 0a 7d |2.}| >>>00000063[ceph: root@cali001 /]# ceph orch ps | grep nvme >>>nvmeof.rbd.cali010.egzbsx cali010 *:5500,4420,8009 running (3m) 2m ago 13h 56.5M - 69c2cf6e1104 98a98e7d29bc >>>[ceph: root@cali001 /]# rados -p rbd listomapkeys nvmeof.None.state >>>bdev_bdev1 >>>bdev_bdev2 >>>host_nqn.2016-06.io.spdk:cnode2_* >>>listener_nqn.2016-06.io.spdk:cnode2_client.nvmeof.rbd.cali010.egzbsx_TCP_10.8.130.10_5002 Tests Performed post upgrade: ------------------------------ - Add new namespaces to existing subsystem and run IOs. - Create new subsystems and ensured namespaces are exposed on clients and able to run IOs over those devices. >>>[root@cali010 ubuntu]# podman run registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof-cli:0.0.3-1 --server-address 10.8.130.10 --server-port 5500 create_subsystem --subnqn nqn.2016-06.io.spdk:cnode3 --serial 3 --max-namespaces 32 >>>INFO:__main__:Created subsystem nqn.2016-06.io.spdk:cnode3: True >>>[root@cali010 ubuntu]# >>>[root@cali010 ubuntu]# podman run registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof-cli:0.0.3-1 --server-address 10.8.130.10 --server-port 5500 create_listener --subnqn nqn.2016-06.io.spdk:cnode3 --trsvcid 5003 --gateway-name client.nvmeof.rbd.cali010.egzbsx --traddr 10.8.130.10 >>>INFO:__main__:Created nqn.2016-06.io.spdk:cnode3 listener: True >>>[root@cali010 ubuntu]# podman run registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof-cli:0.0.3-1 --server-address 10.8.130.10 --server-port 5500 create_bdev --image image11 --pool rbd --bdev bdev11 >>>INFO:__main__:Created bdev bdev11: True >>>[root@cali010 ubuntu]# podman run registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof-cli:0.0.3-1 --server-address 10.8.130.10 --server-port 5500 add_namespace --subnqn nqn.2016-06.io.spdk:cnode2 --bdev bdev11 >>>INFO:__main__:Added namespace 3 to nqn.2016-06.io.spdk:cnode2: True >>>[root@cali010 ubuntu]# podman run registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof-cli:0.0.3-1 --server-address 10.8.130.10 --server-port 5500 add_host --subnqn nqn.2016-06.io.spdk:cnode3 --host '*' >>>INFO:__main__:Allowed open host access to nqn.2016-06.io.spdk:cnode3: True >>>[root@cali010 ubuntu]# podman run registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof-cli:0.0.4-1 --server-address 10.8.130.10 --server-port 5500 create_bdev --image image21 --pool rbd --bdev bdev21 >>>INFO:__main__:Created bdev bdev21: True >>>[root@cali010 ubuntu]# podman run registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof-cli:0.0.4-1 --server-address 10.8.130.10 
--server-port 5500 create_bdev --image image22 --pool rbd --bdev bdev22 >>>INFO:__main__:Created bdev bdev22: True >>>[root@cali010 ubuntu]# podman run registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof-cli:0.0.3-1 --server-address 10.8.130.10 --server-port 5500 add_namespace --subnqn nqn.2016-06.io.spdk:cnode3 --bdev bdev21 >>>INFO:__main__:Added namespace 1 to nqn.2016-06.io.spdk:cnode3: True >>>[root@cali010 ubuntu]# podman run registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof-cli:0.0.3-1 --server-address 10.8.130.10 --server-port 5500 add_namespace --subnqn nqn.2016-06.io.spdk:cnode3 --bdev bdev22 >>>INFO:__main__:Added namespace 2 to nqn.2016-06.io.spdk:cnode3: True >>>[root@cali010 ubuntu]# podman run registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof-cli:0.0.3-1 --server-address 10.8.130.10 --server-port 5500 get_subsystems >>>INFO:__main__:Get subsystems: >>>[ >>> { >>> "nqn": "nqn.2014-08.org.nvmexpress.discovery", >>> "subtype": "Discovery", >>> "listen_addresses": [], >>> "allow_any_host": true, >>> "hosts": [] >>> }, >>> { >>> "nqn": "nqn.2016-06.io.spdk:cnode2", >>> "subtype": "NVMe", >>> "listen_addresses": [ >>> { >>> "transport": "TCP", >>> "trtype": "TCP", >>> "adrfam": "IPv4", >>> "traddr": "10.8.130.10", >>> "trsvcid": "5002" >>> } >>> ], >>> "allow_any_host": true, >>> "hosts": [], >>> "serial_number": "2", >>> "model_number": "SPDK bdev Controller", >>> "max_namespaces": 32, >>> "min_cntlid": 1, >>> "max_cntlid": 65519, >>> "namespaces": [ >>> { >>> "nsid": 1, >>> "bdev_name": "bdev1", >>> "name": "bdev1", >>> "nguid": "9342F7E4E7BF420695DE3092DB9A627B", >>> "uuid": "9342f7e4-e7bf-4206-95de-3092db9a627b" >>> }, >>> { >>> "nsid": 2, >>> "bdev_name": "bdev2", >>> "name": "bdev2", >>> "nguid": "26AD55B3656142A28BF041BFDCCFB47B", >>> "uuid": "26ad55b3-6561-42a2-8bf0-41bfdccfb47b" >>> }, >>> { >>> "nsid": 3, >>> "bdev_name": "bdev11", >>> "name": "bdev11", >>> "nguid": "58EF6383B0304781BE47639A9692E6EB", >>> "uuid": "58ef6383-b030-4781-be47-639a9692e6eb" >>> } >>> ] >>> }, >>> { >>> "nqn": "nqn.2016-06.io.spdk:cnode3", >>> "subtype": "NVMe", >>> "listen_addresses": [ >>> { >>> "transport": "TCP", >>> "trtype": "TCP", >>> "adrfam": "IPv4", >>> "traddr": "10.8.130.10", >>> "trsvcid": "5003" >>> } >>> ], >>> "allow_any_host": true, >>> "hosts": [], >>> "serial_number": "3", >>> "model_number": "SPDK bdev Controller", >>> "max_namespaces": 32, >>> "min_cntlid": 1, >>> "max_cntlid": 65519, >>> "namespaces": [ >>> { >>> "nsid": 1, >>> "bdev_name": "bdev21", >>> "name": "bdev21", >>> "nguid": "979E9C8A0E0F4E39AB02BDE6451CB499", >>> "uuid": "979e9c8a-0e0f-4e39-ab02-bde6451cb499" >>> }, >>> { >>> "nsid": 2, >>> "bdev_name": "bdev22", >>> "name": "bdev22", >>> "nguid": "D3AF2DEEF6AE411096BCFC065F2525CE", >>> "uuid": "d3af2dee-f6ae-4110-96bc-fc065f2525ce" >>> } >>> ] >>> } >>>] >>>[root@cali012 ubuntu]# lsblk >>>NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS >>>sda 8:0 0 3.5T 0 disk >>>sdb 8:16 0 446.6G 0 disk >>>├─sdb1 8:17 0 1G 0 part /boot >>>├─sdb2 8:18 0 4G 0 part [SWAP] >>>└─sdb3 8:19 0 441.6G 0 part / >>>sdc 8:32 0 3.5T 0 disk >>>sdd 8:48 0 2.2T 0 disk >>>sde 8:64 0 2.2T 0 disk >>>sdf 8:80 0 2.2T 0 disk >>>sdg 8:96 0 2.2T 0 disk >>>nvme0n1 259:0 0 1.5T 0 disk >>>nvme1n1 259:2 0 100G 0 disk >>>nvme1n2 259:4 0 100G 0 disk >>>nvme1n3 259:6 0 10G 0 disk >>>[root@cali012 ubuntu]# nvme list >>>Node Generic SN Model Namespace Usage Format FW Rev >>>--------------------- --------------------- -------------------- ---------------------------------------- --------- 
-------------------------- ---------------- -------- >>>/dev/nvme1n3 /dev/ng1n3 2 SPDK bdev Controller 3 10.74 GB / 10.74 GB 4 KiB + 0 B 23.01.1 >>>/dev/nvme1n2 /dev/ng1n2 2 SPDK bdev Controller 2 107.37 GB / 107.37 GB 4 KiB + 0 B 23.01.1 >>>/dev/nvme1n1 /dev/ng1n1 2 SPDK bdev Controller 1 107.37 GB / 107.37 GB 4 KiB + 0 B 23.01.1 >>>/dev/nvme0n1 /dev/ng0n1 X1N0A10VTC88 Dell Ent NVMe CM6 MU 1.6TB 1 257.30 GB / 1.60 TB 512 B + 0 B 2.1.8 >>>[root@cali012 ubuntu]# nvme discover -t tcp -a 10.8.130.10 -s 5003Discovery Log Number of Records 2, Generation counter 3 >>>=====Discovery Log Entry 0====== >>>trtype: tcp >>>adrfam: ipv4 >>>subtype: nvme subsystem >>>treq: not required >>>portid: 0 >>>trsvcid: 5002 >>>subnqn: nqn.2016-06.io.spdk:cnode2 >>>traddr: 10.8.130.10 >>>eflags: not specified >>>sectype: none >>>=====Discovery Log Entry 1====== >>>trtype: tcp >>>adrfam: ipv4 >>>subtype: nvme subsystem >>>treq: not required >>>portid: 0 >>>trsvcid: 5003 >>>subnqn: nqn.2016-06.io.spdk:cnode3 >>>traddr: 10.8.130.10 >>>eflags: not specified >>>sectype: none >>>[root@cali012 ubuntu]# nvme connect -t tcp -a 10.8.130.10 -s 5003 -n nqn.2016-06.io.spdk:cnode3 >>>[root@cali012 ubuntu]# lsblk >>>NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS >>>sda 8:0 0 3.5T 0 disk >>>sdb 8:16 0 446.6G 0 disk >>>├─sdb1 8:17 0 1G 0 part /boot >>>├─sdb2 8:18 0 4G 0 part [SWAP] >>>└─sdb3 8:19 0 441.6G 0 part / >>>sdc 8:32 0 3.5T 0 disk >>>sdd 8:48 0 2.2T 0 disk >>>sde 8:64 0 2.2T 0 disk >>>sdf 8:80 0 2.2T 0 disk >>>sdg 8:96 0 2.2T 0 disk >>>nvme0n1 259:0 0 1.5T 0 disk >>>nvme1n1 259:2 0 100G 0 disk >>>nvme1n2 259:4 0 100G 0 disk >>>nvme1n3 259:6 0 10G 0 disk >>>nvme2n1 259:8 0 15G 0 disk >>>nvme2n2 259:10 0 15G 0 disk >>>[root@cali012 ubuntu]# nvme list >>>Node Generic SN Model Namespace Usage Format FW Rev >>>--------------------- --------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- -------- >>>/dev/nvme2n2 /dev/ng2n2 3 SPDK bdev Controller 2 16.11 GB / 16.11 GB 512 B + 0 B 23.01.1 >>>/dev/nvme2n1 /dev/ng2n1 3 SPDK bdev Controller 1 16.11 GB / 16.11 GB 512 B + 0 B 23.01.1 >>>/dev/nvme1n3 /dev/ng1n3 2 SPDK bdev Controller 3 10.74 GB / 10.74 GB 4 KiB + 0 B 23.01.1 >>>/dev/nvme1n2 /dev/ng1n2 2 SPDK bdev Controller 2 107.37 GB / 107.37 GB 4 KiB + 0 B 23.01.1 >>>/dev/nvme1n1 /dev/ng1n1 2 SPDK bdev Controller 1 107.37 GB / 107.37 GB 4 KiB + 0 B 23.01.1 >>>/dev/nvme0n1 /dev/ng0n1 X1N0A10VTC88 Dell Ent NVMe CM6 MU 1.6TB 1 257.30 GB / 1.60 TB 512 B + 0 B 2.1.8 >>>[root@cali012 ubuntu]# fio --ioengine libaio --filename /dev/nvme2n1 --size 100% --name test-1 --numjobs 1 --rw write --size=100% --iodepth 8 --fsync 32 --group_reporting --bs 4k >>>test-1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8 >>>fio-3.27 >>>Starting 1 process >>>Jobs: 1 (f=1): [f(1)][100.0%][w=37.0MiB/s][w=9479 IOPS][eta 00m:00s] >>>test-1: (groupid=0, jobs=1): err= 0: pid=21272: Thu Oct 26 06:38:03 2023 >>> write: IOPS=25.3k, BW=98.9MiB/s (104MB/s)(15.0GiB/155253msec); 0 zone resets >>> slat (nsec): min=1479, max=299168, avg=2501.99, stdev=2194.60 >>> clat (nsec): min=1350, max=29123k, avg=276762.82, stdev=506938.60 >>> lat (usec): min=3, max=29125, avg=279.37, stdev=506.75 >>> clat percentiles (usec): >>> | 1.00th=[ 17], 5.00th=[ 18], 10.00th=[ 18], 20.00th=[ 18], >>> | 30.00th=[ 19], 40.00th=[ 19], 50.00th=[ 19], 60.00th=[ 21], >>> | 70.00th=[ 28], 80.00th=[ 1037], 90.00th=[ 1156], 95.00th=[ 1205], >>> | 99.00th=[ 1516], 99.50th=[ 
2180], 99.90th=[ 2606], 99.95th=[ 2835], >>> | 99.99th=[ 3621] >>> bw ( KiB/s): min=78024, max=105472, per=100.00%, avg=101329.57, stdev=2301.74, samples=310 >>> iops : min=19506, max=26368, avg=25332.39, stdev=575.44, samples=310 >>> lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=59.76%, 50=18.22% >>> lat (usec) : 100=0.15%, 500=0.01%, 1000=0.87% >>> lat (msec) : 2=20.29%, 4=0.71%, 10=0.01%, 20=0.01%, 50=0.01% >>> fsync/fdatasync/sync_file_range: >>> sync (nsec): min=77, max=8319, avg=274.06, stdev=119.07 >>> sync percentiles (nsec): >>> | 1.00th=[ 94], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 282], >>> | 30.00th=[ 306], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 310], >>> | 70.00th=[ 314], 80.00th=[ 318], 90.00th=[ 322], 95.00th=[ 330], >>> | 99.00th=[ 346], 99.50th=[ 362], 99.90th=[ 402], 99.95th=[ 564], >>> | 99.99th=[ 6432] >>> cpu : usr=2.17%, sys=8.81%, ctx=677774, majf=0, minf=21 >>> IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=103.1%, 16=0.0%, 32=0.0%, >=64=0.0% >>> submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% >>> complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% >>> issued rwts: total=0,3932160,0,122879 short=0,0,0,0 dropped=0,0,0,0 >>> latency : target=0, window=0, percentile=100.00%, depth=8Run status group 0 (all jobs): >>> WRITE: bw=98.9MiB/s (104MB/s), 98.9MiB/s-98.9MiB/s (104MB/s-104MB/s), io=15.0GiB (16.1GB), run=155253-155253msecDisk stats (read/write): >>> nvme2n1: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% >>>[root@cali012 ubuntu]# fio --ioengine libaio --filename /dev/nvme2n2 --size 100% --name test-1 --numjobs 1 --rw write --size=100% --iodepth 8 --fsync 32 --group_reporting --bs 4k >>>test-1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8 >>>fio-3.27 >>>Starting 1 process >>>Jobs: 1 (f=1): [f(1)][100.0%][w=20.7MiB/s][w=5287 IOPS][eta 00m:00s] >>>test-1: (groupid=0, jobs=1): err= 0: pid=21391: Thu Oct 26 06:42:51 2023 >>> write: IOPS=25.5k, BW=99.7MiB/s (105MB/s)(15.0GiB/154064msec); 0 zone resets >>> slat (nsec): min=1451, max=55941, avg=2515.86, stdev=2268.82 >>> clat (nsec): min=1210, max=7057.0k, avg=274634.40, stdev=494028.14 >>> lat (usec): min=3, max=7060, avg=277.26, stdev=493.83 >>> clat percentiles (usec): >>> | 1.00th=[ 18], 5.00th=[ 18], 10.00th=[ 18], 20.00th=[ 19], >>> | 30.00th=[ 19], 40.00th=[ 19], 50.00th=[ 20], 60.00th=[ 21], >>> | 70.00th=[ 28], 80.00th=[ 1029], 90.00th=[ 1156], 95.00th=[ 1205], >>> | 99.00th=[ 1516], 99.50th=[ 2147], 99.90th=[ 2540], 99.95th=[ 2737], >>> | 99.99th=[ 3359] >>> bw ( KiB/s): min=98048, max=105984, per=100.00%, avg=102107.25, stdev=1486.37, samples=308 >>> iops : min=24512, max=26496, avg=25526.81, stdev=371.59, samples=308 >>> lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=57.40%, 50=20.56% >>> lat (usec) : 100=0.16%, 1000=1.04% >>> lat (msec) : 2=20.15%, 4=0.68%, 10=0.01% >>> fsync/fdatasync/sync_file_range: >>> sync (nsec): min=77, max=11386, avg=274.93, stdev=136.54 >>> sync percentiles (nsec): >>> | 1.00th=[ 93], 5.00th=[ 95], 10.00th=[ 98], 20.00th=[ 127], >>> | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 310], 60.00th=[ 314], >>> | 70.00th=[ 322], 80.00th=[ 322], 90.00th=[ 338], 95.00th=[ 342], >>> | 99.00th=[ 350], 99.50th=[ 354], 99.90th=[ 394], 99.95th=[ 556], >>> | 99.99th=[ 7328] >>> cpu : usr=2.21%, sys=8.90%, ctx=733233, majf=0, minf=19 >>> IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=103.1%, 16=0.0%, 32=0.0%, >=64=0.0% >>> submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% >>> complete : 0=0.0%, 
4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% >>> issued rwts: total=0,3932160,0,122879 short=0,0,0,0 dropped=0,0,0,0 >>> latency : target=0, window=0, percentile=100.00%, depth=8Run status group 0 (all jobs): >>> WRITE: bw=99.7MiB/s (105MB/s), 99.7MiB/s-99.7MiB/s (105MB/s-105MB/s), io=15.0GiB (16.1GB), run=154064-154064msecDisk stats (read/write): >>> nvme2n2: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% >>>
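Not part of the log above, but for completeness: after the fio runs, the test initiator can be detached again with the usual nvme-cli cleanup (nqn.2016-06.io.spdk:cnode3 is the subsystem created during this verification):

    # Disconnect the initiator from the test subsystem
    nvme disconnect -n nqn.2016-06.io.spdk:cnode3

    # Confirm the namespaces are no longer visible on the client
    nvme list
    lsblk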
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:7780