Bug 2401192 - [ceph-volume] ValueError: too many values to unpack (expected 2)
Summary: [ceph-volume] ValueError: too many values to unpack (expected 2)
Keywords:
Status: CLOSED DUPLICATE of bug 2400637
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Volume
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 9.0
Assignee: Guillaume Abrioux
QA Contact: Aditya Ramteke
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2025-10-03 03:10 UTC by Santosh Pillai
Modified: 2025-10-03 03:17 UTC (History)
2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2025-10-03 03:17:51 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-12269 0 None None None 2025-10-03 03:11:34 UTC

Description Santosh Pillai 2025-10-03 03:10:44 UTC
Description of problem:

ceph-volume commands are failing. 

$ oc logs rook-ceph-operator-7bd65885cf-c8m4c -n openshift-storage
...
2025-09-29 15:10:38.378323 E | op-osd: failed to provision OSD(s) on PVC ocs-deviceset-localblock-2-data-3zchkf. &{OSDs:[] Status:failed PvcBackedOSD:true Message:failed to configure devices: failed to initialize devices on PVC: failed to run ceph-volume. usage: ceph-volume raw prepare [-h] [--objectstore {bluestore,seastore}]
                               --data DATA [--bluestore]
                               [--crush-device-class CRUSH_DEVICE_CLASS]
                               [--no-tmpfs] [--block.db BLOCK_DB]
                               [--block.wal BLOCK_WAL] [--dmcrypt]
                               [--with-tpm] [--osd-id OSD_ID]
ceph-volume raw prepare: error: argument --data: invalid <ceph_volume.util.arg_validators.ValidRawDevice object at 0x7f7d4304b160> value: '/mnt/ocs-deviceset-localblock-2-data-3zchkf'. debug logs below:
[2025-09-29 15:10:30,794][ceph_volume.main][INFO  ] Running command: ceph-volume --log-path /var/log/ceph/ocs-deviceset-localblock-2-data-3zchkf raw prepare --bluestore --data /mnt/ocs-deviceset-localblock-2-data-3zchkf --crush-device-class ssd --dmcrypt
[2025-09-29 15:10:32,789][ceph_volume.main][INFO  ] Running command: ceph-volume --log-path /var/log/ceph/ocs-deviceset-localblock-2-data-3zchkf raw prepare --bluestore --data /mnt/ocs-deviceset-localblock-2-data-3zchkf --crush-device-class ssd --dmcrypt
: exit status 2}
2025-09-29 15:10:38.776549 I | op-osd: OSD orchestration status for PVC ocs-deviceset-localblock-2-data-57v2tn is "failed"
2025-09-29 15:10:38.776575 E | op-osd: failed to provision OSD(s) on PVC ocs-deviceset-localblock-2-data-57v2tn. &{OSDs:[] Status:failed PvcBackedOSD:true Message:failed to configure devices: failed to initialize devices on PVC: failed to run ceph-volume. usage: ceph-volume raw prepare [-h] [--objectstore {bluestore,seastore}]
                               --data DATA [--bluestore]
                               [--crush-device-class CRUSH_DEVICE_CLASS]
                               [--no-tmpfs] [--block.db BLOCK_DB]
                               [--block.wal BLOCK_WAL] [--dmcrypt]
                               [--with-tpm] [--osd-id OSD_ID]
ceph-volume raw prepare: error: argument --data: invalid <ceph_volume.util.arg_validators.ValidRawDevice object at 0x7fd8f86a7160> value: '/mnt/ocs-deviceset-localblock-2-data-57v2tn'. debug logs below:
[2025-09-29 15:10:32,954][ceph_volume.main][INFO  ] Running command: ceph-volume --log-path /var/log/ceph/ocs-deviceset-localblock-2-data-57v2tn raw prepare --bluestore --data /mnt/ocs-deviceset-localblock-2-data-57v2tn --crush-device-class ssd --dmcrypt
: exit status 2}
2025-09-29 15:10:39.177716 I | op-osd: OSD orchestration status for node ocs-deviceset-localblock-2-data-4s7wv4 is "orchestrating"
2025-09-29 15:10:39.177805 I | op-osd: OSD orchestration status for node ocs-deviceset-localblock-1-data-4j8rlj is "orchestrating"
2025-09-29 15:10:39.177852 I | op-osd: OSD orchestration status for PVC ocs-deviceset-localblock-2-data-4s7wv4 is "orchestrating"
2025-09-29 15:10:39.177864 I | op-osd: OSD orchestration status for PVC ocs-deviceset-localblock-1-data-4j8rlj is "orchestrating"
2025-09-29 15:10:39.177907 I | op-osd: OSD orchestration status for node ocs-deviceset-localblock-2-data-57v2tn is "orchestrating"
2025-09-29 15:10:39.179539 I | op-osd: OSD orchestration status for PVC ocs-deviceset-localblock-1-data-1wd8nx is "completed"
2025-09-29 15:10:39.179559 I | op-osd: creating OSD 7 on PVC "ocs-deviceset-localblock-1-data-1wd8nx"
2025-09-29 15:10:39.179569 I | op-osd: OSD will have its main bluestore block on "ocs-deviceset-localblock-1-data-1wd8nx"
2025-09-29 15:10:40.390079 I | op-osd: OSD orchestration status for PVC ocs-deviceset-localblock-1-data-5d9pml is "failed"
2025-09-29 15:10:40.392840 E | op-osd: failed to provision OSD(s) on PVC ocs-deviceset-localblock-1-data-5d9pml. &{OSDs:[] Status:failed PvcBackedOSD:true Message:failed to configure devices: failed to initialize devices on PVC: failed to run ceph-volume. Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 12e8f979-9b98-4c5a-89e6-49de8c03d8b6
Running command: /usr/sbin/cryptsetup --batch-mode --key-size 512 --key-file - luksFormat /mnt/ocs-deviceset-localblock-1-data-5d9pml
--> Was unable to complete a new OSD, will rollback changes
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.11 --yes-i-really-mean-it
 stderr: purged osd.11
--> No OSD identified by "11" was found among LVM-based OSDs.
--> Proceeding to check RAW-based OSDs.
Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/ceph_volume/objectstore/rawbluestore.py", line 74, in safe_prepare
    self.prepare()
  File "/usr/lib/python3.9/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.9/site-packages/ceph_volume/objectstore/rawbluestore.py", line 102, in prepare
    self.prepare_dmcrypt()
  File "/usr/lib/python3.9/site-packages/ceph_volume/objectstore/rawbluestore.py", line 53, in prepare_dmcrypt
    encryption_utils.luks_open(
  File "/usr/lib/python3.9/site-packages/ceph_volume/util/encryption.py", line 219, in luks_open
    if bypass_workqueue(device):
  File "/usr/lib/python3.9/site-packages/ceph_volume/util/encryption.py", line 67, in bypass_workqueue
    return not Device(device).rotational and conf.dmcrypt_no_workqueue
  File "/usr/lib/python3.9/site-packages/ceph_volume/util/device.py", line 129, in __init__
    sys_info.devices = disk.get_devices()
  File "/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py", line 861, in get_devices
    udev_data = UdevData(sysdir)
  File "/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py", line 1360, in __init__
    key, value = data.split('=')
ValueError: too many values to unpack (expected 2)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/sbin/ceph-volume", line 33, in <module>
    sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())
  File "/usr/lib/python3.9/site-packages/ceph_volume/main.py", line 54, in __init__
    self.main(self.argv)
  File "/usr/lib/python3.9/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3.9/site-packages/ceph_volume/main.py", line 166, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3.9/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/raw/main.py", line 32, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python3.9/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/raw/prepare.py", line 54, in main
    self.objectstore.safe_prepare(self.args)
  File "/usr/lib/python3.9/site-packages/ceph_volume/objectstore/rawbluestore.py", line 78, in safe_prepare
    rollback_osd(self.osd_id)
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/common.py", line 36, in rollback_osd
    Zap(['--destroy', '--osd-id', osd_id]).main()
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/zap.py", line 577, in main
    self.zap_osd()
  File "/usr/lib/python3.9/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/zap.py", line 431, in zap_osd
    self.args.devices = self.find_associated_devices()
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/zap.py", line 156, in find_associated_devices
    raw_osds: Dict[str, Any] = direct_report()
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/raw/list.py", line 23, in direct_report
    return _list.generate(devices)
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/raw/list.py", line 105, in generate
    self.exclude_lvm_osd_devices()
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/raw/list.py", line 81, in exclude_lvm_osd_devices
    self.devices_to_scan = [device for device in filtered_devices_to_scan if device is not None]
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/raw/list.py", line 81, in <listcomp>
    self.devices_to_scan = [device for device in filtered_devices_to_scan if device is not None]
  File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 609, in result_iterator
    yield fs.pop().result()
  File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 446, in result
    return self.__get_result()
  File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
    raise self._exception
  File "/usr/lib64/python3.9/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/raw/list.py", line 84, in filter_lvm_osd_devices
    d = Device(device)
  File "/usr/lib/python3.9/site-packages/ceph_volume/util/device.py", line 129, in __init__
    sys_info.devices = disk.get_devices()
  File "/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py", line 861, in get_devices
    udev_data = UdevData(sysdir)
  File "/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py", line 1360, in __init__
    key, value = data.split('=')
ValueError: too many values to unpack (expected 2). debug logs below:
[2025-09-29 15:10:25,843][ceph_volume.main][INFO  ] Running command: ceph-volume --log-path /var/log/ceph/ocs-deviceset-localblock-1-data-5d9pml raw prepare --bluestore --data /mnt/ocs-deviceset-localblock-1-data-5d9pml --crush-device-class ssd --dmcrypt
... 
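
The traceback above ends at ceph_volume/util/disk.py line 1360, where each line of the udev property data is unpacked with a bare split('='). A minimal sketch of the failing pattern, assuming a udev property whose value itself contains an '=' (the actual offending property on this system is not captured in the log):

    # Hypothetical udev data; the second value contains '=', so split('=')
    # returns three items and the two-name unpack fails.
    data_lines = [
        "DEVNAME=/dev/sdb",
        "SOME_PROPERTY=a=b",
    ]
    for data in data_lines:
        key, value = data.split('=')   # ValueError: too many values to unpack (expected 2)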

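Note on the first two PVC failures above: they surface as an argparse usage error ("argument --data: invalid <...ValidRawDevice object...> value", exit status 2) rather than a traceback. That is consistent with the same underlying ValueError being raised inside the --data validator, because argparse converts a ValueError raised by a type= callable into an "invalid ... value" error. A generic illustration of that argparse behavior (not ceph-volume code; fussy_device is a hypothetical stand-in for ValidRawDevice):

    import argparse

    def fussy_device(path):
        # Stand-in validator that fails while probing the system,
        # the way ValidRawDevice appears to in the logs above.
        raise ValueError("too many values to unpack (expected 2)")

    parser = argparse.ArgumentParser(prog="ceph-volume raw prepare")
    parser.add_argument("--data", type=fussy_device)
    parser.parse_args(["--data", "/mnt/ocs-deviceset-localblock-2-data-3zchkf"])
    # prints "error: argument --data: invalid fussy_device value: ..." and exits with status 2
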
---------------------------------------
Another instance where the same failure was observed, this time while listing raw devices:


2025-10-02 05:35:18.723189 I | cephosd: checking for OSD disks from a different cluster
2025-10-02 05:35:18.723428 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log raw list --format json
2025-10-02 05:35:21.375802 E | cephosd: . Traceback (most recent call last):
  File "/usr/sbin/ceph-volume", line 33, in <module>
    sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())
  File "/usr/lib/python3.9/site-packages/ceph_volume/main.py", line 54, in __init__
    self.main(self.argv)
  File "/usr/lib/python3.9/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3.9/site-packages/ceph_volume/main.py", line 166, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3.9/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/raw/main.py", line 32, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python3.9/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/raw/list.py", line 162, in main
    self.list(args)
  File "/usr/lib/python3.9/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/raw/list.py", line 118, in list
    report = self.generate(args.device)
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/raw/list.py", line 105, in generate
    self.exclude_lvm_osd_devices()
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/raw/list.py", line 81, in exclude_lvm_osd_devices
    self.devices_to_scan = [device for device in filtered_devices_to_scan if device is not None]
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/raw/list.py", line 81, in <listcomp>
    self.devices_to_scan = [device for device in filtered_devices_to_scan if device is not None]
  File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 609, in result_iterator
    yield fs.pop().result()
  File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 446, in result
    return self.__get_result()
  File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
    raise self._exception
  File "/usr/lib64/python3.9/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/raw/list.py", line 84, in filter_lvm_osd_devices
    d = Device(device)
  File "/usr/lib/python3.9/site-packages/ceph_volume/util/device.py", line 129, in __init__
    sys_info.devices = disk.get_devices()
  File "/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py", line 861, in get_devices
    udev_data = UdevData(sysdir)
  File "/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py", line 1360, in __init__
    key, value = data.split('=')
ValueError: too many values to unpack (expected 2)
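
This is the same failure path as in the first excerpt: UdevData.__init__ in ceph_volume/util/disk.py splits each udev data line on every '='. Not the confirmed fix (this report is closed as a duplicate of bug 2400637), but a sketch of the kind of parser change that tolerates '=' inside values, assuming the data is otherwise one KEY=VALUE pair per line:

    def parse_udev_properties(raw: str) -> dict:
        # Sketch only: split on the first '=' so that values containing '='
        # are kept intact; skip lines without any '='.
        props = {}
        for line in raw.splitlines():
            if '=' not in line:
                continue
            key, value = line.split('=', 1)
            props[key] = value
        return props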




Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Set up ODF 4.20.100
2. Observe the OSD prepare pod jobs
 

Actual results:
- OSD prepare jobs fail because the ceph-volume commands they run fail


Expected results:
- OSD prepare jobs should not fail. 


Additional info:

ODF version: 4.20.100

Ceph Version: 
rhceph@sha256:24590b9ab8aebfeebfde344ec0c522164efd57c28f8f88692b91673c7f8f164a:
  version: 8
  release: 562
  upstream-vcs-ref: n/a
  nvr: rhceph-container-8-562

