Bug 1298738 - block device /dev/rbd/$pool/$image is not present
Status: CLOSED NOTABUG
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: RBD
1.3.2
x86_64 Linux
unspecified Severity medium
: rc
: 1.3.2
Assigned To: Josh Durgin
ceph-qe-bugs
: Documentation, ZStream
Depends On:
Blocks:
Reported: 2016-01-14 16:21 EST by Rachana Patel
Modified: 2017-07-30 11:28 EDT (History)
8 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-02-12 14:42:33 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Rachana Patel 2016-01-14 16:21:27 EST
Description of problem:
======================
In the Installation document, the chapter 'BLOCK DEVICE QUICK START' has an example of creating a file system on a block device:
"
Use the block device by creating a file system on the ceph-client node.
sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo
"

Couldn't find /dev/rbd/rbd/foo, and the command fails with the error 'Could not stat /dev/rbd/rbd/foo --- Not a directory'.


Version-Release number of selected component (if applicable):
=============================================================
0.94.5-1.el7cp.x86_64

How reproducible:
=================
always

Steps to Reproduce:
1. Follow the instructions in the 'BLOCK DEVICE QUICK START' chapter.
2. The steps below fail:
===
4. Use the block device by creating a file system on the ceph-client node.
sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo
This may take a few moments.
5. Mount the file system on the ceph-client node.
sudo mount /dev/rbd/rbd/foo /mnt/ceph-block-device
===
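
For context, the quick-start steps that precede step 4 are what create the device and the symlink in the first place. A minimal sketch, using the document's example pool 'rbd' and image 'foo' (the 4096 MB size is arbitrary):

rbd create foo --size 4096
sudo rbd map foo --pool rbd --name client.admin
ls -l /dev/rbd/rbd/foo   # udev should create this symlink, pointing at /dev/rbd0

If the symlink is not created by the map step, the mkfs and mount commands quoted above fail exactly as described here.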

Actual results:
==============
Unable to create the file system and unable to mount it.


Additional info:
=================
It should be '/dev/rbd#' instead of '/dev/rbd/rbd/foo'.
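
For reference, the mapped /dev/rbd# device node can be looked up with rbd showmapped; the output below is illustrative, not taken from the affected system:

# rbd showmapped
id pool image snap device
0  rbd  foo   -    /dev/rbd0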
Comment 2 Ken Dreyer (Red Hat) 2016-02-03 22:37:10 EST
Jason/Josh, could you look briefly over the suggested change and confirm that /dev/rbd# is correct?
Comment 3 Josh Durgin 2016-02-03 22:51:39 EST
The documented '/dev/rbd/$pool/$image' path is correct; the fact that it isn't present indicates that the udev rule that sets it up was not installed or is not working.
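
For reference, the rule in question (50-rbd.rules) runs the ceph-rbdnamer helper for each mapped rbd device and adds the /dev/rbd/$pool/$image symlink; the installed file is authoritative, but it looks roughly like this:

KERNEL=="rbd[0-9]*", ENV{DEVTYPE}=="disk", PROGRAM="/usr/bin/ceph-rbdnamer %k", SYMLINK+="rbd/%c{1}/%c{2}"
KERNEL=="rbd[0-9]*", ENV{DEVTYPE}=="partition", PROGRAM="/usr/bin/ceph-rbdnamer %k", SYMLINK+="rbd/%c{1}/%c{2}-part%n"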
Comment 7 Ken Dreyer (Red Hat) 2016-02-09 13:17:59 EST
Rachana, can you reproduce this bug? Can you also verify that 50-rbd.rules is present on your system?
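
A quick check, assuming the rule is installed under /usr/lib/udev/rules.d (the path and the owning package may differ):

# ls -l /usr/lib/udev/rules.d/50-rbd.rules
# rpm -qf /usr/lib/udev/rules.d/50-rbd.rules   # shows which package, if any, provides the rule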
Comment 10 Ken Dreyer (Red Hat) 2016-02-09 14:19:34 EST
Could this be an issue in ceph-rbdnamer? I don't know how to debug udev issues :(
Comment 11 Jason Dillaman 2016-02-09 14:34:38 EST
If you run "/usr/bin/ceph-rbdnamer <device number>", you should be able to see it dump the pool and image name:

# /usr/bin/ceph-rbdnamer 0
rbd foo

You can test udev via:

# sudo udevadm test --action=add `udevadm info -q path -n /dev/rbdX`
... snip ...
.ID_FS_TYPE_NEW=
ACTION=add
DEVLINKS=/dev/rbd/rbd/foo
DEVNAME=/dev/rbd0
DEVPATH=/devices/virtual/block/rbd0
DEVTYPE=disk
ID_FS_TYPE=
MAJOR=251
MINOR=0
SUBSYSTEM=block
TAGS=:systemd:
USEC_INITIALIZED=699743934832

The important line is the DEVLINKS entry showing that it is creating the necessary symlink.
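
If the test run shows the expected DEVLINKS entry but the symlink still does not appear under /dev/rbd/, reloading the rules and re-triggering the add event is worth trying (rbd0 here is just an example device name):

# sudo udevadm control --reload-rules
# sudo udevadm trigger --action=add --sysname-match=rbd0
# ls -l /dev/rbd/rbd/foo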
Comment 12 Rachana Patel 2016-02-12 14:42:33 EST
Unable to reproduce with the current build; with the latest build the link is created, so closing the bug.

Re-testing done with versions:
ceph-deploy-1.5.27.4-3.el7cp.noarch
ceph-common-0.94.5-9.el7cp.x86_64
