Bug 1336756 - ceph-disk prepare: Error: partprobe /dev/vdb failed : Error: Error informing the kernel about modifications to partition /dev/vdb1 -- Device or resource busy.
Status: CLOSED ERRATA
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Ceph-Disk
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: 2.0
Assigned To: Ken Dreyer (Red Hat)
QA Contact: Daniel Horák
Depends On:
Blocks: 1339705
Reported: 2016-05-17 08:07 EDT by Daniel Horák
Modified: 2017-07-31 17:05 EDT
CC List: 7 users

See Also:
Fixed In Version: parted-3.1-26.el7 ceph-10.2.1-11.el7cp
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1339705
Environment:
Last Closed: 2016-08-23 15:38:42 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers:
  Ceph Project Bug Tracker 15176 - Priority: None, Status: None, Summary: None, Last Updated: 2016-05-23 04:47 EDT
  Red Hat Product Errata RHBA-2016:1755 - Priority: normal, Status: SHIPPED_LIVE, Summary: Red Hat Ceph Storage 2.0 bug fix and enhancement update, Last Updated: 2016-08-23 19:23:52 EDT

Description Daniel Horák 2016-05-17 08:07:23 EDT
Description of problem:
  The command `ceph-disk prepare ...` sometimes fails to prepare a disk for a Ceph OSD with the following error:
    ceph-disk: Error: partprobe /dev/vdb failed : Error: Error informing the kernel about modifications to partition /dev/vdb1 -- Device or resource busy.  This means Linux won't know about any changes you made to /dev/vdb1 until you reboot -- so you shouldn't mount it or use it in any way before rebooting.
    Error: Failed to add partition 1 (Device or resource busy)

Version-Release number of selected component (if applicable):
  ceph-base-10.2.0-1.el7cp.x86_64
  ceph-common-10.2.0-1.el7cp.x86_64
  ceph-osd-10.2.0-1.el7cp.x86_64
  ceph-selinux-10.2.0-1.el7cp.x86_64
  libcephfs1-10.2.0-1.el7cp.x86_64
  python-cephfs-10.2.0-1.el7cp.x86_64

How reproducible:
  40% on our VMs

Steps to Reproduce:
1. Create and install node for Ceph OSD with at least two spare disks.

2. Run command for disk preparation for a Ceph OSD.
  Device /dev/vdb is used for the journal and /dev/vdc for OSD data. If you have more spare disks, you can repeat this command for each "OSD data" device.

  # ceph-disk prepare --cluster ceph /dev/vdc /dev/vdb

3. Before trying again, clean up both the journal and OSD data devices (a loop combining steps 2 and 3 is sketched after this list):
  # sgdisk --zap-all --clear --mbrtogpt -g -- /dev/vdb
  # sgdisk --zap-all --clear --mbrtogpt -g -- /dev/vdc
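
Because the failure is intermittent (roughly 40% of runs on the affected VMs), it is easiest to drive steps 2 and 3 in a loop until the error shows up. A minimal sketch (run as root), assuming the same /dev/vdb (journal) and /dev/vdc (data) devices as above; the iteration count of 20 is arbitrary:

  for i in $(seq 1 20); do
      # step 2: prepare the OSD; stop as soon as the intermittent error appears
      ceph-disk prepare --cluster ceph /dev/vdc /dev/vdb || { echo "prepare failed on iteration $i"; break; }
      # step 3: zap both devices before the next attempt
      sgdisk --zap-all --clear --mbrtogpt -g -- /dev/vdb
      sgdisk --zap-all --clear --mbrtogpt -g -- /dev/vdc
  done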

Actual results:
  Sometimes the ceph-disk command fails with the following (or a similar) error:
  # ceph-disk prepare --cluster ceph /dev/vdc /dev/vdb
    prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
    The operation has completed successfully.
    ceph-disk: Error: partprobe /dev/vdb failed : Error: Error informing the kernel about modifications to partition /dev/vdb1 -- Device or resource busy.  This means Linux won't know about any changes you made to /dev/vdb1 until you reboot -- so you shouldn't mount it or use it in any way before rebooting.
    Error: Failed to add partition 1 (Device or resource busy)
  # echo $?
    1

Expected results:
  The ceph-disk command should properly prepare the disk for a Ceph OSD.

Additional info:
  I discovered this issue while testing USM.
Comment 3 Loic Dachary 2016-05-18 03:28:03 EDT
Thanks for the steps to reproduce, that's very helpful. I think I have all the information I need now.
Comment 4 Daniel Horák 2016-05-18 03:51:19 EDT
Small update/note: I didn't see this issue and wasn't able to reproduce it on VMs in an OpenStack environment, but I saw it (and can reproduce it) on KVM VMs running on our (different) physical servers.
Comment 5 Daniel Horák 2016-05-19 03:25:42 EDT
I've tried updating parted using the build from Fedora 22 (version parted-3.2-16.fc22), as suggested in the upstream issue [1], and I can confirm that it fixes the issue.

Originally with parted-3.1-23.el7.x86_64 it was failing.

[1] http://tracker.ceph.com/issues/15918
Comment 6 Ken Dreyer (Red Hat) 2016-05-19 15:16:28 EDT
https://github.com/ceph/ceph/pull/9195 is the PR to master; still undergoing review upstream.
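
For context, the mitigation discussed upstream is to stop treating a single failed partprobe as fatal: wait for pending udev events to settle, then retry the probe a few times while holding a lock on the device. A rough shell sketch of that idea (the function name, retry count, and timeouts are illustrative, not the actual ceph-disk or PR code):

  update_partition_table() {
      dev="$1"
      for attempt in 1 2 3 4 5; do
          # let in-flight udev events finish before touching the partition table
          udevadm settle --timeout=600
          # hold a shared lock on the device so concurrent opens don't race the probe
          if flock -s "$dev" partprobe "$dev"; then
              return 0
          fi
          echo "partprobe $dev failed, retrying (attempt $attempt)" >&2
          sleep 60
      done
      return 1
  }

Called as `update_partition_table /dev/vdb`, this returns 0 once the kernel has re-read the partition table, instead of failing immediately on the first "Device or resource busy".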
Comment 7 Loic Dachary 2016-05-24 03:35:27 EDT
Hi Daniel,

Would you be so kind as to provide me with access to a machine where I can reproduce the problem? I've collected enough expertise now to make use of it, and I can't seem to reproduce it on CentOS 7.2 (VM or bare metal).

Thanks !
Comment 8 monti lawrence 2016-05-24 11:54:37 EDT
See comment 7. This BZ needs to be resolved ASAP as it is a blocker for Beta 1 (5/31).
Comment 13 Brian Lane 2016-05-25 12:40:03 EDT
I've opened bug 1339705 against parted to track an improvement in partprobe.
Comment 18 Harish NV Rao 2016-06-13 07:19:32 EDT
qa_ack given
Comment 19 Daniel Horák 2016-06-20 08:53:49 EDT
Tested and VERIFIED on VMs according to comment 0 and comment 9 with the following packages:

# rpm -qa parted ceph-osd
  ceph-osd-10.2.2-5.el7cp.x86_64
  parted-3.1-26.el7.x86_64

I'll also try to retest it on real HW.
Comment 20 Daniel Horák 2016-06-21 04:14:17 EDT
Tested on a physical HW server without any problem.
With the following packages:
  parted-3.1-26.el7.x86_64
  ceph-osd-10.2.2-2.el7cp.x86_64

>> VERIFIED
Comment 22 errata-xmlrpc 2016-08-23 15:38:42 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1755.html
