Bug 1884129 - [ cephadm ] Volume group "ceph-f6c313e5-e91a-4ba1-a750-b800e54c06f6" has insufficient free space (0 extents): 953861 required.
Summary: [ cephadm ] Volume group "ceph-f6c313e5-e91a-4ba1-a750-b800e54c06f6" has insufficient free space (0 extents): 953861 required.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.0
Assignee: Juan Miguel Olmo
QA Contact: Sunil Kumar Nagaraju
Docs Contact: Karen Norteman
URL:
Whiteboard:
Duplicates: 1884130
Depends On:
Blocks:
 
Reported: 2020-10-01 06:55 UTC by Sunil Kumar Nagaraju
Modified: 2021-08-30 08:27 UTC
CC List: 5 users

Fixed In Version: ceph-16.1.0-486.el8cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-30 08:26:43 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 48383 0 None None None 2020-11-26 17:09:07 UTC
Github ceph ceph pull 38687 0 None closed ceph-volume: add some flexibility to bytes_to_extents 2021-02-18 05:18:19 UTC
Red Hat Issue Tracker RHCEPH-1041 0 None None None 2021-08-27 04:51:50 UTC
Red Hat Product Errata RHBA-2021:3294 0 None None None 2021-08-30 08:27:01 UTC

Comment 1 RHEL Program Management 2020-10-01 06:55:45 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 3 Juan Miguel Olmo 2020-10-02 08:32:21 UTC
*** Bug 1884130 has been marked as a duplicate of this bug. ***

Comment 5 Juan Miguel Olmo 2020-11-03 16:43:05 UTC
Hi Sunil,

In any case, I think the problem is directly linked to the infrastructure and lab used to run the tests (or even to the tests themselves).

Now that we know this is not a problem with cephadm, can you please close the bug or move it to another component?

Comment 6 Preethi 2020-11-23 06:43:01 UTC
@Juan, the issue is not seen with the latest Pacific build. The issue above was seen in automation; however, we tried simulating the steps manually and I did not see any delay.

Zap completed successfully, and adding the OSD afterwards also succeeded without any delays.
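The manual flow being described is roughly the following; the host name and device path are placeholders, not the exact ones used in the test:

# Zap the device on the target host, then add it back as an OSD.
ceph orch device zap <hostname> /dev/sdX --force
ceph orch daemon add osd <hostname>:/dev/sdX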

Regards,
Preethi

Comment 15 Preethi 2021-02-03 14:34:56 UTC
@Juan, we are seeing ceph orch osd add failures in Teuthology runs with the latest compose.

http://magna002.ceph.redhat.com/pnataraj-2021-02-01_14:05:06-smoke:cephadm-master-distro-basic-clara/395962/tasks/rhcephadm-1.log

Output snippet:
2021-02-01T14:22:05.076 INFO:teuthology.orchestra.run.clara001.stderr:--> Zapping successful for: <Raw Device: /dev/sdd>
2021-02-01T14:22:05.225 DEBUG:teuthology.orchestra.run.clara001:> sudo /home/ubuntu/cephtest/cephadm --image registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-26800-20210129231628 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f7fad68c-64c1-11eb-95d0-002590fc2776 -- ceph orch daemon add osd clara001:/dev/sdd
2021-02-01T14:22:06.231 INFO:journalctl.z.clara003.stdout:Feb 01 19:16:39 clara003 systemd[1]: Starting Ceph mgr.z for f7fad68c-64c1-11eb-95d0-002590fc2776...
2021-02-01T14:22:07.481 INFO:journalctl.z.clara003.stdout:Feb 01 19:16:40 clara003 bash[13769]: d99e4e2c26ea48e84299b761d86be9f701296c32bc1d11bd276ea2a06c6305e1
2021-02-01T14:22:07.481 INFO:journalctl.z.clara003.stdout:Feb 01 19:16:40 clara003 systemd[1]: Started Ceph mgr.z for f7fad68c-64c1-11eb-95d0-002590fc2776.
2021-02-01T14:22:09.233 INFO:journalctl.y.clara014.stdout:Feb 01 19:16:42 clara014 systemd[1]: Starting Ceph mgr.y for f7fad68c-64c1-11eb-95d0-002590fc2776...
2021-02-01T14:22:10.233 INFO:journalctl.y.clara014.stdout:Feb 01 19:16:43 clara014 bash[13324]: 4b80975f94699ae0abcba5bb1d2b62e3dda0f716212dfe171eda50ef8cb51553
2021-02-01T14:22:10.234 INFO:journalctl.y.clara014.stdout:Feb 01 19:16:43 clara014 systemd[1]: Started Ceph mgr.y for f7fad68c-64c1-11eb-95d0-002590fc2776.
2021-02-01T14:22:10.959 INFO:teuthology.orchestra.run.clara001.stderr:Error EINVAL: Traceback (most recent call last):
2021-02-01T14:22:10.960 INFO:teuthology.orchestra.run.clara001.stderr:  File "/usr/share/ceph/mgr/mgr_module.py", line 1269, in _handle_command
2021-02-01T14:22:10.960 INFO:teuthology.orchestra.run.clara001.stderr:    return self.handle_command(inbuf, cmd)
2021-02-01T14:22:10.961 INFO:teuthology.orchestra.run.clara001.stderr:  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 150, in handle_command
2021-02-01T14:22:10.961 INFO:teuthology.orchestra.run.clara001.stderr:    return dispatch[cmd['prefix']].call(self, cmd, inbuf)
2021-02-01T14:22:10.962 INFO:teuthology.orchestra.run.clara001.stderr:  File "/usr/share/ceph/mgr/mgr_module.py", line 380, in call
2021-02-01T14:22:10.963 INFO:teuthology.orchestra.run.clara001.stderr:    return self.func(mgr, **kwargs)
2021-02-01T14:22:10.963 INFO:teuthology.orchestra.run.clara001.stderr:  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 108, in <lambda>
2021-02-01T14:22:10.964 INFO:teuthology.orchestra.run.clara001.stderr:    wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)
2021-02-01T14:22:10.964 INFO:teuthology.orchestra.run.clara001.stderr:  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 97, in wrapper
2021-02-01T14:22:10.965 INFO:teuthology.orchestra.run.clara001.stderr:    return func(*args, **kwargs)
2021-02-01T14:22:10.965 INFO:teuthology.orchestra.run.clara001.stderr:  File "/usr/share/ceph/mgr/orchestrator/module.py", line 823, in _daemon_add_osd
2021-02-01T14:22:10.966 INFO:teuthology.orchestra.run.clara001.stderr:    raise_if_exception(completion)
2021-02-01T14:22:10.967 INFO:teuthology.orchestra.run.clara001.stderr:  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 652, in raise_if_exception
2021-02-01T14:22:10.967 INFO:teuthology.orchestra.run.clara001.stderr:    raise e
2021-02-01T14:22:10.968 INFO:teuthology.orchestra.run.clara001.stderr:RuntimeError: cephadm exited with an error code: 1, stderr:/bin/podman: stderr --> passed data devices: 1 physical, 0 LVM
2021-02-01T14:22:10.968 INFO:teuthology.orchestra.run.clara001.stderr:/bin/podman: stderr --> relative data size: 1.0
2021-02-01T14:22:10.969 INFO:teuthology.orchestra.run.clara001.stderr:/bin/podman: stderr Running command: /usr/bin/ceph-authtool --gen-print-key
2021-02-01T14:22:10.969 INFO:teuthology.orchestra.run.clara001.stderr:/bin/podman: stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 0ea76097-69f6-4e59-8a09-c94192fa6388
2021-02-01T14:22:10.970 INFO:teuthology.orchestra.run.clara001.stderr:/bin/podman: stderr Running command: /usr/sbin/vgcreate --force --yes ceph-e9d1b22e-afec-406a-8689-82e7b6237e17 /dev/sdd
2021-02-01T14:22:10.971 INFO:teuthology.orchestra.run.clara001.stderr:/bin/podman: stderr  stderr: selabel_open failed: No such file or directory
2021-02-01T14:22:10.971 INFO:teuthology.orchestra.run.clara001.stderr:/bin/podman: stderr   selabel_open failed: No such file or directory
2021-02-01T14:22:10.972 INFO:teuthology.orchestra.run.clara001.stderr:/bin/podman: stderr  stderr: selabel_open failed: No such file or directory
2021-02-01T14:22:10.972 INFO:teuthology.orchestra.run.clara001.stderr:/bin/podman: stderr  stdout: Physical volume "/dev/sdd" successfully created.
2021-02-01T14:22:10.973 INFO:teuthology.orchestra.run.clara001.stderr:/bin/podman: stderr  stdout: Volume group "ceph-e9d1b22e-afec-406a-8689-82e7b6237e17" successfully created
2021-02-01T14:22:10.973 INFO:teuthology.orchestra.run.clara001.stderr:/bin/podman: stderr Running command: /usr/sbin/lvcreate --yes -l 57234 -n osd-block-0ea76097-69f6-4e59-8a09-c94192fa6388 ceph-e9d1b22e-afec-406a-8689-82e7b6237e17
2021-02-01T14:22:10.974 INFO:teuthology.orchestra.run.clara001.stderr:/bin/podman: stderr  stderr: selabel_open failed: No such file or directory
2021-02-01T14:22:10.974 INFO:teuthology.orchestra.run.clara001.stderr:/bin/podman: stderr  stderr: selabel_open failed: No such file or directory
2021-02-01T14:22:10.975 INFO:teuthology.orchestra.run.clara001.stderr:/bin/podman: stderr  stderr: Volume group "ceph-e9d1b22e-afec-406a-8689-82e7b6237e17" has insufficient free space (57233 extents): 57234 required.
2021-02-01T14:22:10.976 INFO:teuthology.orchestra.run.clara001.stderr:/bin/podman: stderr --> Was unable to complete a new OSD, will rollback changes
2021-02-01T14:22:10.976 INFO:teuthology.orchestra.run.clara001.stderr:/bin/podman: stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
2021-02-01T14:22:10.977 INFO:teuthology.orchestra.run.clara001.stderr:/bin/podman: stderr  stderr: purged osd.0
2021-02-01T14:22:10.977 INFO:teuthology.orchestra.run.clara001.stderr:/bin/podman: stderr -->  RuntimeError: command returned non-zero exit status: 5
2021-02-01T14:22:10.978 INFO:teuthology.orchestra.run.clara001.stderr:Traceback (most recent call last):
2021-02-01T14:22:10.978 INFO:teuthology.orchestra.run.clara001.stderr:  File "<stdin>", line 7582, in <module>
2021-02-01T14:22:10.979 INFO:teuthology.orchestra.run.clara001.stderr:  File "<stdin>", line 7571, in main
2021-02-01T14:22:10.980 INFO:teuthology.orchestra.run.clara001.stderr:  File "<stdin>", line 1566, in _infer_fsid
2021-02-01T14:22:10.980 INFO:teuthology.orchestra.run.clara001.stderr:  File "<stdin>", line 1650, in _infer_image
2021-02-01T14:22:10.981 INFO:teuthology.orchestra.run.clara001.stderr:  File "<stdin>", line 4180, in command_ceph_volume
2021-02-01T14:22:10.981 INFO:teuthology.orchestra.run.clara001.stderr:  File "<stdin>", line 1329, in call_throws
2021-02-01T14:22:10.982 INFO:teuthology.orchestra.run.clara001.stderr:RuntimeError: Failed command: /bin/podman run --rm --ipc=host --authfile=/etc/ceph/podman-auth.json --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk -e CONTAINER_IMAGE=registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-26800-20210129231628 -e NODE_NAME=clara001 -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -v /var/run/ceph/f7fad68c-64c1-11eb-95d0-002590fc2776:/var/run/ceph:z -v /var/log/ceph/f7fad68c-64c1-11eb-95d0-002590fc2776:/var/log/ceph:z -v /var/lib/ceph/f7fad68c-64c1-11eb-95d0-002590fc2776/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /tmp/ceph-tmpdcyn5du_:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpq_98hn68:/var/lib/ceph/bootstrap-osd/ceph.keyring:z registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-26800-20210129231628 lvm batch --no-auto /dev/sdd --yes --no-systemd
2021-02-01T14:22:10.982 INFO:teuthology.orchestra.run.clara001.stderr:
2021-02-01T14:22:11.060 DEBUG:teuthology.orchestra.run:got remote process result: 22


Version-Release number of selected component (if applicable):
Container: ceph-5.0-rhel-8-containers-candidate-26800-20210129231628
Compose Id: RHCEPH-5.0-RHEL-8-20210129.ci.8
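
The lvcreate failure above (57234 extents requested against 57233 free) looks like an off-by-one in how ceph-volume converts the requested byte size into LVM extents: the size derived from the raw device converts to one extent more than the volume group can actually provide once LVM has reserved space for its own metadata. The upstream change linked above ("ceph-volume: add some flexibility to bytes_to_extents", Github pull 38687) addresses this conversion. The arithmetic below only illustrates the idea with the numbers from this log; it is not the actual ceph-volume code:

# Default LVM extent size is 4 MiB. The VG reported 57233 free extents, while the
# size requested by ceph-volume converts to 57234 extents.
extent_size=$((4 * 1024 * 1024))
vg_free_extents=57233
requested_bytes=$((57234 * extent_size))

wanted=$((requested_bytes / extent_size))                          # 57234: one extent too many
lv_extents=$(( wanted > vg_free_extents ? vg_free_extents : wanted ))
echo "$lv_extents"                                                 # 57233: a size the VG can actually satisfy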

Comment 17 Juan Miguel Olmo 2021-02-11 09:24:42 UTC
Merged in upstream Pacific. The next alpha refresh will include this fix.

Comment 19 Juan Miguel Olmo 2021-02-24 17:04:22 UTC
Downstream alpha images refreshed on 15 Feb or 22 Feb should contain the fix for this issue.

Because of the problem described in https://bugzilla.redhat.com/show_bug.cgi?id=1923719, if you want to verify this issue you will need to apply the workaround and disable SELinux on your hosts.
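
A minimal sketch of the verification flow, assuming the SELinux workaround from the bug above and reusing the host and device names from the Teuthology log purely as examples:

# Workaround for https://bugzilla.redhat.com/show_bug.cgi?id=1923719: put SELinux in
# permissive mode on the OSD host before verifying.
sudo setenforce 0

# Re-run the flow that failed in the log: zap the device, then add it back as an OSD.
ceph orch device zap clara001 /dev/sdd --force
ceph orch daemon add osd clara001:/dev/sdd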

Comment 24 errata-xmlrpc 2021-08-30 08:26:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

