Bug 1884005 - [cephadm] Error with "no container with name or id" is seen in the logs for services which have containers in the running state
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: low
Target Milestone: ---
Target Release: 5.0
Assignee: Adam King
QA Contact: Vasishta
Docs Contact: Karen Norteman
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2020-09-30 18:15 UTC by Preethi
Modified: 2021-08-30 08:27 UTC
CC List: 3 users

Fixed In Version: ceph-16.1.0-486.el8cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-30 08:26:43 UTC
Embargoed:




Links
System                     ID              Last Updated
Ceph Project Bug Tracker   46247           2020-10-01 08:01:39 UTC
Red Hat Issue Tracker      RHCEPH-1033     2021-08-27 04:43:42 UTC
Red Hat Product Errata     RHBA-2021:3294  2021-08-30 08:27:01 UTC

Description Preethi 2020-09-30 18:15:44 UTC
Description of problem: [cephadm] An error with "no container with name or id" is seen in the logs for services which have containers in the running state.


Version-Release number of selected component (if applicable):
[root@magna094 ubuntu]# ./cephadm version
Using recent ceph image registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-82880-20200915232213
ceph version 16.0.0-5535.el8cp (ebdb8e56e55488bf2280b4da6c370936940ee554) pacific (dev)
[root@magna094 ubuntu]# 


How reproducible:


Steps to Reproduce:
1. Bootstrap a cluster with a minimum of 3 MONs, 2 MGRs, and OSDs.
2. Check that the cluster health is OK.
3. Exit the cephadm shell, run systemctl status on the MGR daemon's service, and check the behaviour (a shell sketch of these steps follows).
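A minimal shell sketch of these steps (cephadm and podman are assumed to be installed; <host-ip>, <fsid>, <host>, and <id> are placeholders, since cephadm names its systemd units ceph-<fsid>@<daemon>.service):

./cephadm bootstrap --mon-ip <host-ip>       # then add hosts/daemons until 3 MONs, 2 MGRs, and OSDs are up
./cephadm shell -- ceph status               # expect HEALTH_OK
systemctl list-units 'ceph-*'                # find the exact mgr unit name on the host
systemctl status 'ceph-<fsid>@mgr.<host>.<id>.service'
journalctl -u 'ceph-<fsid>@mgr.<host>.<id>.service'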

The same errors were seen in the journalctl logs as well. Below is a snippet of the error.


Actual results:

[root@magna094 ubuntu]# systemctl status ceph-f0309064-fca1-11ea-81e2-002590fbecb6.wbdunl.service

● ceph-f0309064-fca1-11ea-81e2-002590fbecb6.wbdunl.service - Ceph mgr.magna094.wbdunl for f0309064-fca1-11ea-81e2-002590fbecb6
   Loaded: loaded (/etc/systemd/system/ceph-f0309064-fca1-11ea-81e2-002590fbecb6@.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-09-30 11:42:18 UTC; 6h ago
  Process: 1332657 ExecStopPost=/bin/rm -f //run/ceph-f0309064-fca1-11ea-81e2-002590fbecb6.wbdunl.service-pid //run/ceph-f0309064-fca1-11ea-81e2-002590fbecb6.wbdunl.service-cid (code=e>
  Process: 1332655 ExecStopPost=/bin/bash /var/lib/ceph/f0309064-fca1-11ea-81e2-002590fbecb6/mgr.magna094.wbdunl/unit.poststop (code=exited, status=0/SUCCESS)
  Process: 1332271 ExecStop=/bin/podman stop ceph-f0309064-fca1-11ea-81e2-002590fbecb6-mgr.magna094.wbdunl (code=exited, status=0/SUCCESS)
  Process: 1333087 ExecStart=/bin/bash /var/lib/ceph/f0309064-fca1-11ea-81e2-002590fbecb6/mgr.magna094.wbdunl/unit.run (code=exited, status=0/SUCCESS)
  Process: 1333080 ExecStartPre=/bin/rm -f //run/ceph-f0309064-fca1-11ea-81e2-002590fbecb6.wbdunl.service-pid //run/ceph-f0309064-fca1-11ea-81e2-002590fbecb6.wbdunl.service-cid (code=e>
  Process: 1332959 ExecStartPre=/bin/podman rm ceph-f0309064-fca1-11ea-81e2-002590fbecb6-mgr.magna094.wbdunl (code=exited, status=1/FAILURE)
 Main PID: 1333419 (conmon)
    Tasks: 0 (limit: 204376)
   Memory: 4.7M
   CGroup: /system.slice/system-ceph\x2df0309064\x2dfca1\x2d11ea\x2d81e2\x2d002590fbecb6.slice/ceph-f0309064-fca1-11ea-81e2-002590fbecb6.wbdunl.service
           ‣ 1333419 /usr/libexec/podman/conmon -s -c b04e20b86445ff3a1feea049e7c733602ab8b4853157acc37bbb12ffb0a09dcc -u b04e20b86445ff3a1feea049e7c733602ab8b4853157acc37bbb12ffb0a09dcc -n ceph-f0309064-fca1-1>

Sep 30 11:42:16 magna094 systemd[1]: Starting Ceph mgr.magna094.wbdunl for f0309064-fca1-11ea-81e2-002590fbecb6...
Sep 30 11:42:16 magna094 podman[1332959]: Error: no container with name or ID ceph-f0309064-fca1-11ea-81e2-002590fbecb6-mgr.magna094.wbdunl found: no such container
Sep 30 11:42:17 magna094 bash[1333087]: Error: no container with name or ID ceph-f0309064-fca1-11ea-81e2-002590fbecb6-mgr.magna094.wbdunl found: no such container
Sep 30 11:42:17 magna094 bash[1333087]: ceph-f0309064-fca1-11ea-81e2-002590fbecb6-mgr.magna094.wbdunl
Sep 30 11:42:17 magna094 bash[1333087]: Error: no container with ID or name "ceph-f0309064-fca1-11ea-81e2-002590fbecb6-mgr.magna094.wbdunl" found: no such container
Sep 30 11:42:17 magna094 podman[1333257]: 2020-09-30 11:42:17.397326899 +0000 UTC m=+0.269446727 container create b04e20b86445ff3a1feea049e7c733602ab8b4853157acc37bbb12ffb0a09dcc (image=registry-proxy.engineeri>
Sep 30 11:42:17 magna094 podman[1333257]: 2020-09-30 11:42:17.964866242 +0000 UTC m=+0.836986075 container init b04e20b86445ff3a1feea049e7c733602ab8b4853157acc37bbb12ffb0a09dcc (image=registry-proxy.engineering>
Sep 30 11:42:18 magna094 podman[1333257]: 2020-09-30 11:42:18.030630214 +0000 UTC m=+0.902750058 container start b04e20b86445ff3a1feea049e7c733602ab8b4853157acc37bbb12ffb0a09dcc (image=registry-proxy.engineerin>
Sep 30 11:42:18 magna094 bash[1333087]: b04e20b86445ff3a1feea049e7c733602ab8b4853157acc37bbb12ffb0a09dcc
Sep 30 11:42:18 magna094 systemd[1]: Started Ceph mgr.magna094.wbdunl for f0309064-fca1-11ea-81e2-002590fbecb6.
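The status output above shows where the message comes from: the ExecStartPre=/bin/podman rm cleanup step exits with status=1/FAILURE when there is no leftover container to remove, and podman prints "Error: no container with name or ID ... found" into the journal. The unit still starts, so the non-zero exit status is already tolerated (systemd's '-' prefix on an ExecStartPre line has that effect); the defect is the misleading error text itself. To confirm the daemon's container really is up despite the message (a sketch; the ceph-<fsid>-<daemon> container name is taken from this report):

podman ps --filter name=ceph-f0309064-fca1-11ea-81e2-002590fbecb6-mgr.magna094.wbdunl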
**************************************************************************

[root@magna094 ubuntu]# journalctl -u ceph-f0309064-fca1-11ea-81e2-002590fbecb6.wbdunl.service
-- Logs begin at Wed 2020-09-16 11:27:08 UTC, end at Wed 2020-09-30 17:52:42 UTC. --
Sep 22 07:05:24 magna094 systemd[1]: Starting Ceph mgr.magna094.wbdunl for f0309064-fca1-11ea-81e2-002590fbecb6...
Sep 22 07:05:24 magna094 podman[528914]: Error: no container with name or ID ceph-f0309064-fca1-11ea-81e2-002590fbecb6-mgr.magna094.wbdunl found: no such container
Sep 22 07:05:24 magna094 bash[528936]: Error: no container with name or ID ceph-f0309064-fca1-11ea-81e2-002590fbecb6-mgr.magna094.wbdunl found: no such container
Sep 22 07:05:24 magna094 bash[528936]: ceph-f0309064-fca1-11ea-81e2-002590fbecb6-mgr.magna094.wbdunl
Sep 22 07:05:24 magna094 bash[528936]: Error: no container with ID or name "ceph-f0309064-fca1-11ea-81e2-002590fbecb6-mgr.magna094.wbdunl" found: no such container
Sep 22 07:05:24 magna094 podman[528978]: 2020-09-22 07:05:24.679563749 +0000 UTC m=+0.236624762 container create f6ed471d2b68b5ca503b8ab32bfc349bc02c2fb544e620058c8407838c6a6f94 (image=registry-proxy.engineerin>
Sep 22 07:05:25 magna094 podman[528978]: 2020-09-22 07:05:25.146217359 +0000 UTC m=+0.703278373 container init f6ed471d2b68b5ca503b8ab32bfc349bc02c2fb544e620058c8407838c6a6f94 (image=registry-proxy.engineering.>
Sep 22 07:05:25 magna094 podman[528978]: 2020-09-22 07:05:25.20446507 +0000 UTC m=+0.761526091 container start f6ed471d2b68b5ca503b8ab32bfc349bc02c2fb544e620058c8407838c6a6f94 (image=registry-proxy.engineering.>
Sep 22 07:05:25 magna094 bash[528936]: f6ed471d2b68b5ca503b8ab32bfc349bc02c2fb544e620058c8407838c6a6f94
Sep 22 07:05:25 magna094 systemd[1]: Started Ceph mgr.magna094.wbdunl for f0309064-fca1-11ea-81e2-002590fbecb6.
Sep 29 15:29:00 magna094 systemd[1]: Stopping Ceph mgr.magna094.wbdunl for f0309064-fca1-11ea-81e2-002590fbecb6...
Sep 29 15:29:00 magna094 podman[1247011]: 2020-09-29 15:29:00.701817665 +0000 UTC m=+0.245717994 container died f6ed471d2b68b5ca503b8ab32bfc349bc02c2fb544e620058c8407838c6a6f94 (image=registry-proxy.engineering>
Sep 29 15:29:00 magna094 podman[1247011]: 2020-09-29 15:29:00.783305455 +0000 UTC m=+0.327205722 container stop f6ed471d2b68b5ca503b8ab32bfc349bc02c2fb544e620058c8407838c6a6f94 (image=registry-proxy.engineering>
Sep 29 15:29:00 magna094 podman[1247011]: f6ed471d2b68b5ca503b8ab32bfc349bc02c2fb544e620058c8407838c6a6f94
Sep 29 15:29:01 magna094 systemd[1]: Stopped Ceph mgr.magna094.wbdunl for f0309064-fca1-11ea-81e2-002590fbecb6.
Sep 29 15:33:09 magna094 systemd[1]: Starting Ceph mgr.magna094.wbdunl for f0309064-fca1-11ea-81e2-002590fbecb6...
Sep 29 15:33:09 magna094 podman[1247850]: Error: no container with name or ID ceph-f0309064-fca1-11ea-81e2-002590fbecb6-mgr.magna094.wbdunl found: no such container
Sep 29 15:33:09 magna094 bash[1247873]: Error: no container with name or ID ceph-f0309064-fca1-11ea-81e2-002590fbecb6-mgr.magna094.wbdunl found: no such container

**********************************************************

./cephadm ls reports the container in the running state:

  {
        "style": "cephadm:v1",
        "name": "mgr.magna094.wbdunl",
        "fsid": "f0309064-fca1-11ea-81e2-002590fbecb6",
        "systemd_unit": "ceph-f0309064-fca1-11ea-81e2-002590fbecb6.wbdunl",
        "enabled": true,
        "state": "running",
        "container_id": "b04e20b86445ff3a1feea049e7c733602ab8b4853157acc37bbb12ffb0a09dcc",
        "container_image_name": "registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-82880-20200915232213",
        "container_image_id": "5be7c66ea2b0da5ee0a6ecb1ab90d40d37b16a679a11166abac9d18b17dd5923",
        "version": "16.0.0-5535.el8cp",
        "started": "2020-09-30T11:42:17.194575",
        "created": "2020-09-22T07:05:25.213855",
        "deployed": "2020-09-22T07:05:23.890881",
        "configured": "2020-09-22T07:38:24.050353"


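For scripted checks, the daemon's reported state can be pulled out of that JSON (a sketch, assuming jq is installed):

./cephadm ls | jq -r '.[] | select(.name == "mgr.magna094.wbdunl") | .state'

This prints "running" for the daemon above.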

Expected results: No container errors should be seen when containers are in the running state.


Additional info:

Comment 1 Adam King 2021-03-04 13:09:08 UTC
I believe this got fixed upstream here: https://github.com/ceph/ceph/pull/38804. I couldn't see any such error messages using the latest upstream image.

[root@vm-00 ~]# systemctl status ceph-74009ba2-7ce8-11eb-b007-525400ef5c50.bjeysa
● ceph-74009ba2-7ce8-11eb-b007-525400ef5c50.bjeysa.service - Ceph mgr.vm-00.bjeysa for 74009ba2-7ce8-11eb-b007-525400ef5c50
   Loaded: loaded (/etc/systemd/system/ceph-74009ba2-7ce8-11eb-b007-525400ef5c50@.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2021-03-04 12:53:15 UTC; 7min ago
 Main PID: 12217 (conmon)
    Tasks: 2 (limit: 4683)
   Memory: 1.8M
      CPU: 495ms
   CGroup: /system.slice/system-ceph\x2d74009ba2\x2d7ce8\x2d11eb\x2db007\x2d525400ef5c50.slice/ceph-74009ba2-7ce8-11eb-b007-525400ef5c50.bjeysa.service
           └─12217 /usr/bin/conmon --api-version 1 -c 8a401d2ef713c34a63d5324eab982066edf72fa71a2fca5019405c1914883aef -u 8a401d2ef713c34a63d5324eab982066edf72fa71a2fca5019405c1914883aef -r /usr/bin/crun -b /var/lib/containers/>

Mar 04 13:00:46 vm-00 conmon[12217]: 
Mar 04 13:00:48 vm-00 conmon[12217]: debug 
Mar 04 13:00:48 vm-00 conmon[12217]: 2021-03-04T13:00:48.827+0000 7fa23e12e700  0 log_channel(cluster) log [DBG] : pgmap v195: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar 04 13:00:48 vm-00 conmon[12217]: 
Mar 04 13:00:50 vm-00 conmon[12217]: debug 
Mar 04 13:00:50 vm-00 conmon[12217]: 2021-03-04T13:00:50.827+0000 7fa23e12e700  0 log_channel(cluster) log [DBG] : pgmap v196: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar 04 13:00:50 vm-00 conmon[12217]: 
Mar 04 13:00:52 vm-00 conmon[12217]: debug 
Mar 04 13:00:52 vm-00 conmon[12217]: 2021-03-04T13:00:52.828+0000 7fa23e12e700  0 log_channel(cluster) log [DBG] : pgmap v197: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar 04 13:00:52 vm-00 conmon[12217]: 
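One way to double-check a build for this bug (a sketch; the unit name is taken from the output above) is to grep the unit's journal for the old message:

journalctl -u ceph-74009ba2-7ce8-11eb-b007-525400ef5c50.bjeysa.service | grep -i 'no container with'

No output is expected on a fixed build.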


@Preethi, can you confirm whether this issue is still present with the latest downstream image?

Comment 2 Preethi 2021-03-05 16:26:06 UTC
@Adam, I am not seeing the issue with the latest compose. Output below for reference.

[root@magna021 ubuntu]# systemctl status ceph-aa1c72ac-7d0f-11eb-923c-002590fc2a2e.ceph.redhat.com.service
● ceph-aa1c72ac-7d0f-11eb-923c-002590fc2a2e.ceph.redhat.com.service - Ceph mon.magna021.ceph.redhat.com for aa1c72ac-7d0f-11eb-923c-002590fc2a2e
   Loaded: loaded (/etc/systemd/system/ceph-aa1c72ac-7d0f-11eb-923c-002590fc2a2e@.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2021-03-04 17:34:05 UTC; 22h ago
 Main PID: 80653 (conmon)
    Tasks: 2 (limit: 204080)
   Memory: 7.5M
   CGroup: /system.slice/system-ceph\x2daa1c72ac\x2d7d0f\x2d11eb\x2d923c\x2d002590fc2a2e.slice/ceph-aa1c72ac-7d0f-11eb-923c-002590fc2a2e.ceph.redhat.com.service
           └─80653 /usr/bin/conmon --api-version 1 -c 078c2c4c8059ee9da298d932d56d98ccf41ad8b3bdd9b63ad4861ad70ed6314a -u 078c2c4c8059ee9da298d932d56d98ccf41ad8b3bdd9b63ad4861ad70ed6314a -r /usr/bin/runc -b /va>

Mar 05 16:21:18 magna021.ceph.redhat.com conmon[80653]: cephadm 2021-03-05T16:21:17.646876+0000 mgr.magna021.ceph.redhat.com.xvjakn (mgr.14170) 50072 : cephadm [INF] Refreshing plena006.ceph.redhat.com facts
Mar 05 16:21:18 magna021.ceph.redhat.com conmon[80653]: audit 2021-
Mar 05 16:21:18 magna021.ceph.redhat.com conmon[80653]: 03-05T16:21:17.729717+0000 mon.magna021.ceph.redhat.com (mon.0) 21524
Mar 05 16:21:18 magna021.ceph.redhat.com conmon[80653]:  : audit [INF] from='mgr.14170 10.8.128.21:0/1697975695' entity='mgr.magna021.ceph.redhat.com.xvjakn' 
Mar 05 16:21:18 magna021.ceph.redhat.com conmon[80653]: cephadm 
Mar 05 16:21:18 magna021.ceph.redhat.com conmon[80653]: 2021-03-05T16:21:17.730232+0000 mgr.magna021.ceph.redhat.com.xvjakn (mgr
Mar 05 16:21:18 magna021.ceph.redhat.com conmon[80653]: .14170) 50073 : cephadm [INF] Refreshing plena005.ceph.redhat.com facts
Mar 05 16:21:18 magna021.ceph.redhat.com conmon[80653]: debug 2021-03-05T16:21:18.767+0000 7fdffa3c7700  0 mon.magna021.ceph.redhat.com@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "m>
Mar 05 16:21:18 magna021.ceph.redhat.com conmon[80653]: debug 2021-03-05T16:21:18.767+0000 7fdffa3c7700  0 log_channel(audit) log [DBG] : from='mgr.14170 10.8.128.21:0/1697975695' entity='mgr.magna021.ceph.redh>
Mar 05 16:21:19 magna021.ceph.redhat.com conmon[80653]: debug 2021-03-05T16:21:19.158+0000 7fdffcbcc700  1 mon.magna021.ceph.redhat.com@0(leader).osd e3606 _set_new_cache_sizes cache_size:1020054731 inc_alloc: >

[root@magna021 ubuntu]# journalctl -u ceph-aa1c72ac-7d0f-11eb-923c-002590fc2a2e.ceph.redhat.com.service
-- Logs begin at Thu 2021-03-04 16:33:10 UTC, end at Fri 2021-03-05 16:21:49 UTC. --
Mar 04 17:33:57 magna021.ceph.redhat.com systemd[1]: Starting Ceph mon.magna021.ceph.redhat.com for aa1c72ac-7d0f-11eb-923c-002590fc2a2e...
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.509+0000 7f896ab32700  0 set uid:gid to 167:167 (ceph:ceph)
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.509+0000 7f896ab32700  0 ceph version 16.1.0-486.el8cp (f9701a56b7b8182352532afba8db2bf394c8585a) pacific (rc), process ceph-mon>
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.509+0000 7f896ab32700  0 pidfile_write: ignore empty --pid-file
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.515+0000 7f896ab32700  0 load: jerasure load: lrc load: isa 
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb: RocksDB version: 6.8.1
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: 
Mar 04 17:33:58 magna021.ceph.redhat.com bash[79768]: 227031e89316fcf13d302da47c6b42a4f19600fbf105c388d3dcf042ac21dd55
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb: Git sha rocksdb_build_git_sha:@0@
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb: Compile date Mar  1 2021
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb: DB SUMMARY
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: 
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb: CURRENT file:  CURRENT
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: 
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb: IDENTITY file:  IDENTITY
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: 
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb: MANIFEST file:  MANIFEST-000001 size: 13 Bytes
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: 
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb: SST files in /var/lib/ceph/mon/ceph-magna021.ceph.redhat.com/store.db dir, Total Num: 0, files>
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: 
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-magna021.ceph.redhat.com/store.db: 000003.log s>
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: 
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb:                         Options.error_if_exists: 0
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb:                       Options.create_if_missing: 0
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb:                         Options.paranoid_checks: 1
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb:                                     Options.env: 0x5601348761c0
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb:                                      Options.fs: Posix File System
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb:                                Options.info_log: 0x5601353b1f40
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb:                Options.max_file_opening_threads: 16
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb:                              Options.statistics: (nil)
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb:                               Options.use_fsync: 0
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb:                       Options.max_log_file_size: 0
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb:                  Options.max_manifest_file_size: 1073741824
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb:                   Options.log_file_time_to_roll: 0
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb:                       Options.keep_log_file_num: 1000
Mar 04 17:33:58 magna021.ceph.redhat.com conmon[79891]: debug 2021-03-04T17:33:58.516+0000 7f896ab32700  4 rocksdb:                    Options.recycle_log_file_num: 0
Mar 04 17:33:58 magna021.ceph.redhat.com c

Comment 5 errata-xmlrpc 2021-08-30 08:26:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement) and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

