Bug 2231360 - Out of 3, only 2 OSDs are added after adding capacity
Summary: Out of 3, only 2 OSDs are added after adding capacity
Keywords:
Status: NEW
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: rook
Version: 4.13
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ---
Assignee: Santosh Pillai
QA Contact: Neha Berry
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-08-11 11:34 UTC by Aman Agrawal
Modified: 2023-08-15 15:20 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:



Comment 4 Santosh Pillai 2023-08-11 15:06:18 UTC
The OSD prepare pod is stuck while running the ceph-volume prepare command.
The backing disk for this PV is /dev/sdd on the compute-1 node.
From the ceph-volume logs on the compute-1 node where the OSD prepare pod is stuck:

```
[2023-08-11 10:11:21,471][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/sdd
[2023-08-11 10:11:21,492][ceph_volume.process][INFO  ] stderr unable to read label for /dev/sdd: (2) No such file or directory
[2023-08-11 10:11:21,492][ceph_volume.devices.raw.list][DEBUG ] assuming device /dev/sdd is not BlueStore; ceph-bluestore-tool failed to get info from device: [] ['unable to read label for /dev/sdd: (2) No such file or directory']
[2023-08-11 10:11:21,492][ceph_volume.devices.raw.list][INFO  ] device /dev/sdd does not have BlueStore information
```

This could be something to do with the disk itself.
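
For triage, here is a minimal sketch of how to locate the stuck prepare pod and map its PV back to the device, assuming the default openshift-storage namespace and Rook's standard app=rook-ceph-osd-prepare label (the pod and PV names below are placeholders):

```
# List the OSD prepare pods and spot the one that never completes
oc -n openshift-storage get pods -l app=rook-ceph-osd-prepare

# Tail the stuck pod's logs to see the ceph-volume command it hangs on
oc -n openshift-storage logs <osd-prepare-pod> --all-containers

# For a local PV, print the backing device path it claims on the node
oc get pv <pv-name> -o jsonpath='{.spec.local.path}'
```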

Comment 5 Santosh Pillai 2023-08-11 15:20:02 UTC
Output of ceph-volume inventory on this disk:

```
sh-5.1# ceph-volume inventory /dev/sdd
 stderr: lsblk: /dev/sdd: not a block device
Traceback (most recent call last):
  File "/usr/sbin/ceph-volume", line 33, in <module>
    sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())
  File "/usr/lib/python3.9/site-packages/ceph_volume/main.py", line 41, in __init__
    self.main(self.argv)
  File "/usr/lib/python3.9/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3.9/site-packages/ceph_volume/main.py", line 153, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3.9/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.9/site-packages/ceph_volume/inventory/main.py", line 50, in main
    self.format_report(Device(self.args.path, with_lsm=self.args.with_lsm))
  File "/usr/lib/python3.9/site-packages/ceph_volume/util/device.py", line 131, in __init__
    self._parse()
  File "/usr/lib/python3.9/site-packages/ceph_volume/util/device.py", line 225, in _parse
    dev = disk.lsblk(self.path)
  File "/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py", line 243, in lsblk
    result = lsblk_all(device=device,
  File "/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py", line 337, in lsblk_all
    raise RuntimeError(f"Error: {err}")
RuntimeError: Error: ['lsblk: /dev/sdd: not a block device']
```

Comment 6 Santosh Pillai 2023-08-11 15:24:43 UTC
```
sh-5.1# lsblk /dev/sdd
lsblk: /dev/sdd: not a block device
sh-5.1#
```
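
Taken together with the earlier "(2) No such file or directory" from ceph-bluestore-tool, the errors suggest the /dev/sdd node is either missing or no longer a block special file. A quick sketch for confirming that from a host shell on compute-1, assuming oc debug access to the node:

```
oc debug node/compute-1
# inside the debug pod:
chroot /host

# A healthy disk node shows type 'b' (block special) in the first column
ls -l /dev/sdd

# Should print 'block special file' for a usable disk
stat -c '%F' /dev/sdd

# Ask udev what it currently knows about this node
udevadm info --query=all --name=/dev/sdd
```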

Comment 7 Travis Nielsen 2023-08-15 15:20:57 UTC
Moving out of 4.14 while the bad disk is being investigated.

