Bug 1298738 - block device /dev/rbd/$pool/$image is not present
Summary: block device /dev/rbd/$pool/$image is not present
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: RBD
Version: 1.3.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 1.3.2
Assignee: Josh Durgin
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-01-14 21:21 UTC by Rachana Patel
Modified: 2017-07-30 15:28 UTC
CC: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-02-12 19:42:33 UTC
Target Upstream Version:



Description Rachana Patel 2016-01-14 21:21:27 UTC
Description of problem:
======================
In the Installation document, the chapter BLOCK DEVICE QUICK START has an example of creating a file system on a block device:
"
Use the block device by creating a file system on the ceph-client node.
sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo
"

/dev/rbd/rbd/foo does not exist, and this command fails with the error: 'Could not stat /dev/rbd/rbd/foo --- Not a directory'.


Version-Release number of selected component (if applicable):
=============================================================
0.94.5-1.el7cp.x86_64

How reproducible:
=================
always

Steps to Reproduce:
1. Follow the instructions in 'BLOCK DEVICE QUICK START'.
2. The steps below fail:
===
4. Use the block device by creating a file system on the ceph-client node.
sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo
This may take a few moments.
5. Mount the file system on the ceph-client node.
sudo mount /dev/rbd/rbd/foo /mnt/ceph-block-device
===

Actual results:
==============
Unable to create a file system and unable to mount.


Additional info:
=================
It should be '/dev/rbd#' instead of '/dev/rbd/rbd/foo'.
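The suggested workaround can be sketched as a small shell helper that prefers the friendly udev symlink when it exists and falls back to the raw /dev/rbd&lt;N&gt; node otherwise. The helper name and the devroot parameter are hypothetical, added only so the logic can be exercised outside a live system (pass /dev on a real client):

```shell
# Hypothetical helper: resolve the device path for an RBD image.
# $1 = device directory root (normally /dev), $2 = pool,
# $3 = image, $4 = device number assigned at map time.
pick_rbd_dev() {
    devroot="$1"; pool="$2"; image="$3"; num="$4"
    if [ -e "${devroot}/rbd/${pool}/${image}" ]; then
        # udev rule ran: use the friendly pool/image symlink
        echo "${devroot}/rbd/${pool}/${image}"
    else
        # symlink missing: fall back to the raw device node
        echo "${devroot}/rbd${num}"
    fi
}
```

For the quick-start example, `pick_rbd_dev /dev rbd foo 0` prints /dev/rbd/rbd/foo when the symlink is present and /dev/rbd0 when it is not.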

Comment 2 Ken Dreyer (Red Hat) 2016-02-04 03:37:10 UTC
Jason/Josh, could you look briefly over the suggested change and confirm that /dev/rbd# is correct?

Comment 3 Josh Durgin 2016-02-04 03:51:39 UTC
The documented '/dev/rbd/$pool/$image' path is correct - that it isn't present indicates that the udev rule setting it up was not installed or not working.
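The naming scheme the udev rule provides can be sketched as a one-line join of pool and image name under /dev/rbd/. This is an illustrative assumption about the rule's effect, not its actual implementation, and the function name is made up:

```shell
# Sketch of the symlink path the udev rule is expected to create:
# /dev/rbd/$pool/$image. $1 = pool, $2 = image.
rbd_devlink() {
    echo "/dev/rbd/$1/$2"
}
```

For the quick-start example, `rbd_devlink rbd foo` yields /dev/rbd/rbd/foo, matching the documented path.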

Comment 7 Ken Dreyer (Red Hat) 2016-02-09 18:17:59 UTC
Rachana, can you reproduce this bug? Can you also verify that 50-rbd.rules is present on your system?

Comment 10 Ken Dreyer (Red Hat) 2016-02-09 19:19:34 UTC
Could this be an issue in ceph-rbdnamer? I don't know how to debug udev issues :(

Comment 11 Jason Dillaman 2016-02-09 19:34:38 UTC
If you run "/usr/bin/ceph-rbdnamer <device number>", you should be able to see it dump the pool and image name:

# /usr/bin/ceph-rbdnamer 0
rbd foo

You can test udev via:

# sudo udevadm test --action=add `udevadm info -q path -n /dev/rbdX`
... snip ...
.ID_FS_TYPE_NEW=
ACTION=add
DEVLINKS=/dev/rbd/rbd/foo
DEVNAME=/dev/rbd0
DEVPATH=/devices/virtual/block/rbd0
DEVTYPE=disk
ID_FS_TYPE=
MAJOR=251
MINOR=0
SUBSYSTEM=block
TAGS=:systemd:
USEC_INITIALIZED=699743934832

The important line is the DEVLINKS entry showing that it is creating the necessary symlink.
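The behavior of ceph-rbdnamer can be sketched as reading the pool and image name for a device number from sysfs attributes. The sysfs layout here is an assumption (on a live system the attributes live under the kernel's rbd bus directory), so the sketch takes the tree root as a parameter and uses a made-up function name:

```shell
# Hypothetical re-implementation of what ceph-rbdnamer does:
# read the pool and image name for device number $2 from a
# sysfs-like tree rooted at $1, and print them space-separated.
rbdnamer_sketch() {
    root="$1"; num="$2"
    pool=$(cat "${root}/${num}/pool")
    image=$(cat "${root}/${num}/name")
    echo "${pool} ${image}"
}
```

Given attributes matching the example above, the sketch prints "rbd foo", mirroring the `/usr/bin/ceph-rbdnamer 0` output in this comment.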

Comment 12 Rachana Patel 2016-02-12 19:42:33 UTC
Unable to reproduce with the current build; with the latest build the link is created, so closing the bug.

Re-tested with versions:
ceph-deploy-1.5.27.4-3.el7cp.noarch
ceph-common-0.94.5-9.el7cp.x86_64

